
Raspberry Pi hikes board prices in UK as global RAM shortage tightens
Recommended for you

Valve signals Steam Deck OLED shortages as global RAM market tightens
Valve warned that availability of the Steam Deck OLED will be intermittent in some markets as memory and storage allocations tighten under surging AI datacenter demand. Suppliers’ prioritization of large hyperscale orders — a dynamic industry executives and chip vendors now say could persist for years — is forcing OEMs to reassess launch timing, pricing, and BOM choices.

AI-driven memory squeeze reshapes GPU and storage markets as prices surge
A surge in memory demand driven by AI workloads has pushed standalone RAM prices up several hundred percent, and those costs are now bleeding into GPUs and high-capacity storage. Manufacturers are reallocating scarce memory to higher-margin products, forcing lineup changes, higher street prices for certain GPUs, and a wider cascade of pricing pressure across components.

Dell Technologies Warns Memory Shortage Threatens U.S. AI Scale
Dell executives say constrained memory capacity is the primary bottleneck slowing national AI deployment and urge regulators to avoid new barriers; industry signals from Intel, Samsung and others suggest the shortfall may persist for multiple years and will shift supply toward AI‑optimized DRAM and HBM. The combined effect: higher prices, allocation-driven product choices, and a scramble for both hardware capacity and software memory-efficiency techniques to sustain large-scale AI workloads.

Intel warns memory shortage will persist through 2028
Intel’s CEO says global memory shortages will likely last until 2028, and rising AI-driven demand is already provoking supplier reallocations that squeeze consumer and midrange products. The combination of prolonged tightness and targeted wafer starts for high‑performance DRAM and HBM will keep prices elevated and complicate procurement for OEMs, cloud operators and smaller system integrators.

IDC: Memory Shortage to Shrink Smartphone Market by 12.9% in 2026
Research firm IDC now forecasts a 12.9% fall in smartphone volumes in 2026 driven by an acute memory supply pinch that reallocates advanced chips to data centers. Industry signals — including Qualcomm's guidance cut, supplier road‑map shifts toward HBM/AI DRAM, and Intel's warning that tightness could last into 2028 — suggest the squeeze may be both immediate and multi‑year, pressuring OEM revenue and product cadence.

Cloud giants' hardware binge tightens markets and nudges users toward rented AI compute
Major cloud providers are concentrating purchases of GPUs, high-density DRAM and related components to support AI workloads, creating retail shortages and higher prices that push smaller buyers toward rented compute. Rapid datacenter buildouts, permitting and power constraints, and changes in supplier allocation and financing compound the risk that scarcity will be monetized into long-term service revenue and reduced market choice.

Micron and Memory Makers Reprice Markets as Hyperscalers Lock Supply
Hyperscalers are signing multi‑year memory contracts that have sent memory equities sharply higher and drained spot inventory; the squeeze is broadening from datacenter modules into retail RAM, SSDs and GPUs, and analysts differ on whether relief arrives in 2027 or the tightness extends into 2028. The shift reallocates wafer starts and qualification lanes toward HBM and AI‑optimized DRAM, advantaging large buyers and producers while pressuring OEMs, smaller clouds and consumer device timelines.

Memory, Not Just GPUs: DRAM Spike Forces New AI Cost Playbook
A roughly 7x surge in DRAM spot prices has pushed memory from a secondary expense to a primary cost lever for AI inference. Combined, hardware allocation shifts by chipmakers and emerging software patterns (prompt-cache tiers, observational memory, and techniques such as Nvidia's Dynamic Memory Sparsification) mean teams must pair procurement strategy with cache orchestration to control per-inference spend.
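The "prompt-cache tier" idea mentioned above can be sketched in a few dozen lines. The following is a minimal, hypothetical illustration (class name, tier sizes, and eviction policy are all assumptions, not a description of any vendor's product): a small hot LRU tier backed by a larger cold tier, where hits avoid paying for inference again.

```python
# Hypothetical sketch of a two-tier prompt cache. All names and capacities
# are illustrative assumptions, not any vendor's API.
import hashlib
from collections import OrderedDict

class TieredPromptCache:
    def __init__(self, hot_capacity=2, cold_capacity=8):
        self.hot = OrderedDict()    # small, fast tier (e.g. RAM-resident)
        self.cold = OrderedDict()   # larger, cheaper tier (e.g. local SSD)
        self.hot_capacity = hot_capacity
        self.cold_capacity = cold_capacity
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(prompt: str) -> str:
        # Hash the prompt so keys stay fixed-size regardless of prompt length.
        return hashlib.sha256(prompt.encode()).hexdigest()

    def get_or_compute(self, prompt: str, infer):
        key = self._key(prompt)
        if key in self.hot:                 # hot hit: refresh recency
            self.hot.move_to_end(key)
            self.hits += 1
            return self.hot[key]
        if key in self.cold:                # cold hit: promote to hot tier
            value = self.cold.pop(key)
            self.hits += 1
            self._put_hot(key, value)
            return value
        self.misses += 1                    # miss: pay for inference once
        value = infer(prompt)
        self._put_hot(key, value)
        return value

    def _put_hot(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        if len(self.hot) > self.hot_capacity:
            # Demote the least-recently-used hot entry instead of discarding it.
            demoted_key, demoted_val = self.hot.popitem(last=False)
            self.cold[demoted_key] = demoted_val
            if len(self.cold) > self.cold_capacity:
                self.cold.popitem(last=False)
```

In a real deployment, hit rate times per-inference cost gives the spend avoided by the cache, which is the orchestration metric the entry above is pointing at.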