Two of the most influential leaders in AI are warning that the technology’s next phase could concentrate unprecedented intellectual power inside data centers, raising profound geopolitical and societal questions.
At the AI Summit Impact in India, OpenAI CEO Sam Altman outlines a future in which machine intelligence may eclipse human capacity on a global scale.
He says the projection is uncertain but urgent enough to demand serious debate.
“If we are right, by the end of 2028, more of the world’s intellectual capacity could reside inside of data centers than outside of them. This is an extraordinary statement to make, and of course we could be wrong, but I think it really bears serious consideration…
A superintelligence, at some point on its development curve, would be capable of doing a better job being the CEO of a major company than any executive, certainly me, or doing better research than our best scientists.”
The OpenAI chief executive says society must have a stake in shaping the outcome of superintelligence before the technology falls into the wrong hands.
“The future of AI is not going to unfold exactly like anyone predicts… We don’t yet know how to think about some superintelligence being aligned with dictators in totalitarian countries. We don’t know how to think about countries using AI to fight new kinds of war with each other. We don’t know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it’s important to have more understanding and society-wide debate before we’re all surprised.”
At the same event, Anthropic CEO Dario Amodei outlines a similar trajectory, using a different metaphor to explain the scale of what may be coming.
He says the next generation of systems will function as coordinated intelligence, operating at speeds beyond human capability.
“We’re increasingly close to what I’ve called a country of geniuses in a data center, a set of AI agents that are more capable than most humans at most things and can coordinate at superhuman speed. That level of capability is something the world has never seen before and brings a very wide range of both opportunities and concerns for humanity.”
Amodei lays out both the transformative upside and the systemic risks, while echoing Altman’s concerns.
“On the positive side, we have the potential to cure diseases that have been incurable for thousands of years, to radically improve human health, and to lift billions out of poverty, including the global South, and create a better world for everyone.
On the side of risks, I’m concerned about the autonomous behavior of AI models, their potential for misuse by individuals and governments, and their potential for economic displacement.”
Both executives frame the coming wave of AI capability as neither purely utopian nor purely dystopian, but as a force powerful enough to reshape economic systems, political power structures, and global stability if left unchecked or misunderstood.

