Wall Street Rotates Into Memory and Fiber as AI Infrastructure Broadens Beyond GPUs
Micron and Corning surge on data center component shortages while Nvidia's dominance fades. Investors bet the AI buildout requires a wider hardware stack than GPUs alone.

Wall Street is redistributing capital across the AI hardware stack, propelling legacy chipmakers and component suppliers past the graphics processor manufacturers that dominated the sector's early phase. Intel has gained more than 200% in 2026, while Micron and AMD have each more than doubled. Nvidia, by contrast, has risen just 16% for the year, only slightly ahead of the Nasdaq Composite.
The shift reflects a structural bet that data centers require a broader array of advanced components than the GPU-centric architectures that powered the first wave of generative AI deployments. Memory has emerged as the most acute bottleneck, with a global shortage driving prices higher and turning Micron into one of the market's most actively traded names over the past twelve months. Corning, a fiber-optic cable manufacturer, is posting historic gains as new data center builds demand high-bandwidth interconnects.
CPU demand is accelerating as workloads migrate from chatbots to AI agents, which require more orchestration and decision logic than pure inference tasks. Bank of America estimates the data center CPU market will expand from $27 billion in 2025 to $60 billion by 2030. AMD reported quarterly results that sailed past estimates on data center strength, and CEO Lisa Su raised the company's server CPU growth forecast from 18% to 35% over the next three to five years.
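As a back-of-the-envelope check on the Bank of America projection, the implied compound annual growth rate can be computed from the two figures cited above (a minimal sketch; the $27 billion and $60 billion endpoints come from the article, and the straight 2025-to-2030 horizon is an assumption):

```python
# Implied CAGR of the data center CPU market per BofA's forecast:
# $27B in 2025 growing to $60B in 2030 (figures from the article).
start, end = 27e9, 60e9
years = 2030 - 2025  # assumed five-year horizon

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17.3% per year
```

That works out to about 17% annual market growth, which puts AMD's raised 35% server CPU growth forecast well above the projected market rate, implying the company expects to take share rather than merely ride the expansion.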
The rotation comes as enterprises distribute AI workloads across on-premises, public cloud, and colocation sites to avoid single-vendor lock-in. A recent survey found 86% of organizations now operate hybrid infrastructure, with 77% prioritizing inference over training and managing an average of seven different models simultaneously.
Nvidia's early lead in AI infrastructure was built on CUDA software lock-in and a GPU architecture optimized for parallel training workloads. That advantage remains intact for large-scale model training, but the inference and agent deployment phases favor a more heterogeneous hardware mix. Intel and AMD are positioned to capture CPU orchestration tasks, while memory makers benefit from the bandwidth requirements of multi-model deployments. Telecommunications providers are exploring roles as orchestration partners, leveraging experience in intelligent routing and multi-tenant network security to help enterprises manage distributed AI stacks.
The market's broadening reflects a maturation of AI infrastructure from a GPU-centric buildout to a full-stack deployment model. Component shortages in memory and fiber suggest supply chains have not yet caught up to the scale of planned data center construction, creating pricing power for suppliers outside the GPU oligopoly. Whether this rotation proves durable will depend on whether agent workloads continue to grow faster than training demand, and whether enterprises sustain hybrid deployment strategies rather than consolidating around a single cloud provider.
Sources
https://www.cnbc.com/2026/05/08/wall-street-ai-chip-love-moves-from-nvidia-to-intel-amd-and-micron.html
Frames the shift as a "changing of the guard" driven by CPU demand for AI agents and memory shortages benefiting Micron and Corning.
https://www.fool.com/investing/2026/05/10/the-best-under-the-radar-ai-stocks-to-buy-in-2026/
Highlights Dell's AI Factory and Astera Labs as integration plays connecting legacy hardware with newer AI components.
https://www.rcrwireless.com/20260507/ai/f5-ai-inference-in-house
Emphasizes telcos positioning as orchestration partners for hybrid AI infrastructure, citing 86% of organizations using distributed deployments.
https://hackaday.com/2026/05/09/getting-a-proprietary-bus-gpu-onto-pcie-enables-cheaper-local-llms-for-now/
Explores arbitrage opportunities in proprietary GPUs adapted for local AI, signaling market inefficiencies as infrastructure diversifies.
