Anthropic CEO Dario Amodei warns there is a non-zero chance that model developers could create a dangerous rogue AI model, stressing that nobody fully knows how to control the technology.
In a new NBC interview, Amodei admits that even leading AI labs do not have complete control over the systems they build, warning that uneven standards across the industry could allow dangerous models to slip through.
“I think to be fair, none of us fully know how to control AI systems. I can’t tell you there’s a 100% chance that even the systems we build are perfectly reliable. We do everything we can to make them more reliable every day. We run tests, and we advocate for the regulation of the technology. I think we do pretty well, but even we can’t guarantee that everything is perfectly safe.”
While Amodei says Anthropic is trying to set the standard for AI safety, he notes that some players in the race do not take the risk of a misaligned model seriously.
“I do worry with some of the others that the standard is lower, [that there is] a wide variety of levels of responsibility in some of the players. Some of the things that Google does around the biological risks of the models, I think, are also fairly responsible.

“But I think the problem is that when you have a lot of players, the dangers are set by the least responsible players. Even if there’s one, two other responsible players, I think what you can’t deny is that there are some players out there who are not responsible.”