Famed short-seller Michael Burry warns that the economics behind the US AI data center boom may be about to change.
In a new post on X, the Big Short investor highlights reports surrounding the upcoming launch of DeepSeek V4, a Chinese model said to rival Western systems at a fraction of the cost.
Burry frames the moment as a major structural shift rather than a single product release, arguing that falling model costs and performance convergence could pressure the entire AI infrastructure trade.
“Convergence, commoditization, compression and reflexivity – the four horsemen of the data center buildout apocalypse. DeepSeek V4 launches mid-February 2026, 1 trillion parameters, 1 million token context, 3 architectural innovations at 10-40x lower than Western Comps NVDA.”
Based on Burry’s post, convergence appears to refer to AI models converging in capability, eroding the differentiation that justifies specialized silicon. Commoditization suggests that compute will become standardized and cheap, while compression refers to more efficient models that lower overall demand for compute.
Burry ties all four horsemen to reflexivity, saying that if AI compresses margins, commoditizes compute, and weakens the need for incremental human knowledge creation, then the financial foundation under the CapEx boom could crack.
“Reflexivity in that human knowledge is insufficient to justify planned compute expenditure, AI itself reduces the dynamism of new human knowledge creation, making human knowledge insufficient…etc. Jevons paradox does not apply to the compute buildout, and the singularity will not have a financial sponsor or physical proof of economic demand.”
The investor’s comments follow an Introl blog post outlining DeepSeek V4’s reported specifications and ambitions. According to the write-up, the model is slated to launch in mid-February 2026 with 1 trillion total parameters and a 1-million-token context window.
Introl says DeepSeek V4 introduces three architectural changes: Manifold-Constrained Hyper-Connections, Engram conditional memory, and Sparse Attention. It cites internal benchmarks that claim more than 80% performance on SWE-bench, with inference costs estimated at 10 to 40x lower than comparable Western systems.
If those claims hold, DeepSeek V4 could deliver near-frontier coding performance at a fraction of the cost, threatening Western model pricing power and compressing margins across the AI stack.
In short, it signals a potential shift from AI scarcity to AI commoditization, where efficiency and cost, not just scale, redefine the competitive edge.
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.