Anthropic co-founder Jack Clark says concerns about artificial intelligence accelerating beyond human control are no longer confined to science fiction.
In a recent discussion published as a Google Doc and shared via Substack, Clark lays out what he sees as the most consequential long-term risk in AI development: systems that can meaningfully improve themselves.
Clark says the issue revolves around what researchers sometimes refer to as “closing the loop” on AI research, where AI systems increasingly handle the task of building better AI. According to the Anthropic executive, the threat of self-improving AI is not imminent, but the early warning signs are already emerging.
“To be clear, I assign essentially zero likelihood to there being recursively self-improving AI systems on the planet in January 2026. We do see extremely early signs of AI getting better at doing components of AI research, ranging from kernel development to autonomously fine-tuning open-weight models.”
Clark warns that if those capabilities continue to improve and eventually converge into systems that can significantly redesign or improve themselves, the pace of AI development would change dramatically. The shift would make progress more difficult for humans to track or fully comprehend.
“If you end up building an AI system that can build itself, then AI development would speed up very dramatically and probably become harder for people to understand.”
According to Clark, a self-improving AI would raise major policy challenges and could trigger an unprecedented surge in global economic activity driven primarily by machine intelligence.
“This would pose a range of significant policy issues and would also likely lead to an unprecedented step change in the economic activity of the world.”
Clark argues that governments should act now, demanding full transparency from major AI companies rather than waiting to respond once the technology arrives.
“Put another way, if I had five minutes with a policymaker, I’d basically say to them, ‘Self-improving AI sounds like science fiction, but there’s nothing in the technology that says it’s impossible, and if it happened, it’d be a huge deal, and you should pay attention to it. You should demand transparency from AI companies about exactly what they’re seeing here, and make sure you have third parties you trust who can test out AI systems for these properties.’”
In late December, OpenAI CEO Sam Altman published a job listing for a Head of Preparedness role, noting that the company is seeing models capable of finding critical security vulnerabilities.