A pioneer of modern AI says the world may be racing toward a form of machine intelligence humanity is not prepared to control.
Speaking in a new Bloomberg interview, Geoffrey Hinton – often called the “Godfather of AI” for his foundational work in neural networks – says the industry’s rapid progress and competitive dynamics are pushing developers toward systems that could surpass human capabilities.
He says Big Tech is creating an alien race with the potential to be far more intelligent than humans, and that it could arrive within a decade.
“Suppose that some telescope had seen an alien invasion fleet that was going to get here in about 10 years. We would be scared, and we would be doing stuff about it. Well, that’s what we have. We’re constructing these aliens, but they’re going to get here in about 10 years, and they’re going to be smarter than us. We should be thinking very, very hard.”
He adds that while some leading AI labs take the threat seriously, he believes commercial pressure is overwhelming caution.
“Yes, I think both Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) and also Jeff Dean (Google), they all take safety fairly seriously. Obviously, they’re involved in a big commercial competition too, so it’s difficult, but they all understand the existential threat that when AI gets super intelligent, it might just replace us. So they worry about it a bit. I think that some companies are less responsible than others.”
Hinton singles out Meta and OpenAI, claiming the industry’s priorities have shifted from responsible research to speed and market dominance.
“So, for example, I think Meta isn’t particularly responsible. OpenAI was founded to be responsible about this, but it gets less responsible every day, and their best safety researchers are all leaving, or have left.”
He argues that the industry's prevailing mindset misunderstands the fundamental power imbalance that could emerge once systems surpass human capability.
“So their basic model is, I’m the CEO. And this super-intelligent AI is the extremely smart executive assistant… It’s not going to be like that when it’s smarter than us and more powerful than us. That’s just the wrong model, I believe.”
Hinton offers a biological metaphor instead, saying humanity may need to accept a subordinate role to coexist safely with more advanced beings.
“We need to look around and say, is there any model where a less intelligent thing controls a more intelligent thing? And we have one model of that… a baby controlling a mother. Evolution put lots of work into allowing the baby to control the mother… That seems a much more plausible model of how to coexist with the superintelligence.”
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

