Google's Memory Compression Breakthrough Rattles AI Hardware Investment Thesis
New algorithms that cut AI memory needs six-fold trigger a sell-off in chip stocks, forcing Wall Street to reconsider whether software efficiency will capture value once reserved for hardware.

Google Research's release of three compression algorithms in March has triggered a sharp reassessment of the artificial intelligence infrastructure investment thesis, sending memory and storage stocks lower as investors confront the possibility that software optimization may claim a larger share of AI economics than previously anticipated.
The algorithms, TurboQuant, PolarQuant, and Quantized Johnson-Lindenstrauss, are designed to reduce the memory overhead of running large language models and vector search systems. In Google's testing, TurboQuant cut key-value cache memory requirements at least six-fold without sacrificing accuracy, according to disclosures from the company's research division.
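To make the mechanism concrete: techniques like these work by storing cached model activations at lower numerical precision. The sketch below is not Google's TurboQuant algorithm, whose details are in the research disclosures; it is a generic symmetric int8 quantization example, with hypothetical function names, showing how trading precision for storage shrinks a key-value cache entry.

```python
# Illustrative sketch only -- generic int8 quantization, NOT TurboQuant.
# Shows how storing a cache vector at 8-bit precision instead of 32-bit
# floats cuts memory, at the cost of a small reconstruction error.
import random

def quantize_int8(values):
    """Map float values to int8 codes plus one float scale per vector."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    codes = [round(v / scale) for v in values]  # each code fits in one byte
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from the compact codes."""
    return [c * scale for c in codes]

random.seed(0)
kv_vector = [random.uniform(-1, 1) for _ in range(128)]  # one cache row

codes, scale = quantize_int8(kv_vector)
restored = dequantize(codes, scale)

fp32_bytes = 4 * len(kv_vector)        # original: 32-bit floats
int8_bytes = 1 * len(codes) + 4        # compressed: 8-bit codes + one scale
max_err = max(abs(a - b) for a, b in zip(kv_vector, restored))

print(f"compression ratio: {fp32_bytes / int8_bytes:.1f}x")
print(f"max reconstruction error: {max_err:.4f}")  # bounded by scale / 2
```

This toy version yields roughly a 4x reduction; reaching the six-fold figure Google reports while preserving accuracy is precisely the harder problem the research addresses.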
Shares of Micron, Western Digital, Seagate, and SanDisk declined following the announcement as market participants began questioning assumptions about AI-driven memory demand growth. The sell-off reflects a broader debate over whether the next phase of AI development will continue to reward hardware suppliers or shift value toward companies that make existing infrastructure more efficient through compression, routing optimization, and lower-cost inference.
The market reaction stands in contrast to ongoing capital commitments in the memory sector. SanDisk separately announced an investment in Nanya to secure long-term DRAM supply, signaling continued confidence in sustained AI hardware demand despite the efficiency gains demonstrated by Google's research.
