    Anthropic Co-Founder Warns Society Is Not Ready for Self-Improving AI, Says Early Warning Signs Are Appearing

By Henry Kanapi · January 11, 2026 · 3 Mins Read

    Anthropic co-founder Jack Clark says concerns about artificial intelligence accelerating beyond human control are no longer confined to science fiction.

    In a recent Google Doc discussion shared on Substack, Clark lays out what he sees as the most consequential long-term risk in AI development: systems that can meaningfully improve themselves.

Clark says the issue centers on what researchers sometimes call “closing the loop” on AI research: AI systems increasingly handling the task of building better AI. According to the Anthropic executive, the threat of self-improving AI is not imminent, but early warning signs are already emerging.

    “To be clear, I assign essentially zero likelihood to there being recursively self-improving AI systems on the planet in January 2026. We do see extremely early signs of AI getting better at doing components of AI research, ranging from kernel development to autonomously fine-tuning open-weight models.”

    Clark warns that if those capabilities continue to improve and eventually converge into systems that can significantly redesign or improve themselves, the pace of AI development would change dramatically. The shift would make progress more difficult for humans to track or fully comprehend.

    “If you end up building an AI system that can build itself, then AI development would speed up very dramatically and probably become harder for people to understand.”

    According to Clark, a self-improving AI would raise major policy challenges and could trigger an unprecedented surge in global economic activity driven primarily by machine intelligence.

    “This would pose a range of significant policy issues and would also likely lead to an unprecedented step change in the economic activity of the world.”

    Clark says governments must take action and demand full transparency from big AI firms now instead of reacting to the issue once the tech has arrived.

    “Put another way, if I had five minutes with a policymaker, I’d basically say to them, ‘Self-improving AI sounds like science fiction, but there’s nothing in the technology that says it’s impossible, and if it happened, it’d be a huge deal, and you should pay attention to it. You should demand transparency from AI companies about exactly what they’re seeing here, and make sure you have third parties you trust who can test out AI systems for these properties.'” 

In late December, OpenAI CEO Sam Altman published a job ad for a Head of Preparedness role, noting that the firm is seeing models capable of finding critical security vulnerabilities.

