CapitalAI Daily

    Anthropic Co-Founder Warns Society Is Not Ready for Self-Improving AI, Says Early Warning Signs Are Appearing

By Henry Kanapi · January 11, 2026 · 3 Mins Read

    Anthropic co-founder Jack Clark says concerns about artificial intelligence accelerating beyond human control are no longer confined to science fiction.

    In a recent Google Doc discussion shared on Substack, Clark lays out what he sees as the most consequential long-term risk in AI development: systems that can meaningfully improve themselves.

    Clark says the issue revolves around what researchers sometimes refer to as “closing the loop” on AI research, where AI systems increasingly handle the task of building better AI. According to the Anthropic executive, the threat of self-improving AI is not imminent, but the early warning signs are already emerging.

    “To be clear, I assign essentially zero likelihood to there being recursively self-improving AI systems on the planet in January 2026. We do see extremely early signs of AI getting better at doing components of AI research, ranging from kernel development to autonomously fine-tuning open-weight models.”

    Clark warns that if those capabilities continue to improve and eventually converge into systems that can significantly redesign or improve themselves, the pace of AI development would change dramatically. The shift would make progress more difficult for humans to track or fully comprehend.

    “If you end up building an AI system that can build itself, then AI development would speed up very dramatically and probably become harder for people to understand.”

    According to Clark, a self-improving AI would raise major policy challenges and could trigger an unprecedented surge in global economic activity driven primarily by machine intelligence.

    “This would pose a range of significant policy issues and would also likely lead to an unprecedented step change in the economic activity of the world.”

    Clark says governments must take action and demand full transparency from big AI firms now instead of reacting to the issue once the tech has arrived.

    “Put another way, if I had five minutes with a policymaker, I’d basically say to them, ‘Self-improving AI sounds like science fiction, but there’s nothing in the technology that says it’s impossible, and if it happened, it’d be a huge deal, and you should pay attention to it. You should demand transparency from AI companies about exactly what they’re seeing here, and make sure you have third parties you trust who can test out AI systems for these properties.'” 

In late December, OpenAI CEO Sam Altman published a job ad for a Head of Preparedness role, noting that the firm is seeing models capable of finding critical security vulnerabilities.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

Tags: Anthropic, Jack Clark, regulation, self-improving AI