CapitalAI Daily

    Anthropic Co-Founder Warns Society Is Not Ready for Self-Improving AI, Says Early Warning Signs Are Appearing

By Henry Kanapi | January 11, 2026

    Anthropic co-founder Jack Clark says concerns about artificial intelligence accelerating beyond human control are no longer confined to science fiction.

    In a recent Google Doc discussion shared on Substack, Clark lays out what he sees as the most consequential long-term risk in AI development: systems that can meaningfully improve themselves.

Clark says the issue centers on what researchers sometimes call "closing the loop" on AI research: AI systems increasingly handling the work of building better AI. According to the Anthropic executive, the threat of self-improving AI is not imminent, but early warning signs are already emerging.

    “To be clear, I assign essentially zero likelihood to there being recursively self-improving AI systems on the planet in January 2026. We do see extremely early signs of AI getting better at doing components of AI research, ranging from kernel development to autonomously fine-tuning open-weight models.”

    Clark warns that if those capabilities continue to improve and eventually converge into systems that can significantly redesign or improve themselves, the pace of AI development would change dramatically. The shift would make progress more difficult for humans to track or fully comprehend.

    “If you end up building an AI system that can build itself, then AI development would speed up very dramatically and probably become harder for people to understand.”

    According to Clark, a self-improving AI would raise major policy challenges and could trigger an unprecedented surge in global economic activity driven primarily by machine intelligence.

    “This would pose a range of significant policy issues and would also likely lead to an unprecedented step change in the economic activity of the world.”

Clark says governments must act now and demand full transparency from major AI firms, rather than reacting only once the technology has arrived.

    “Put another way, if I had five minutes with a policymaker, I’d basically say to them, ‘Self-improving AI sounds like science fiction, but there’s nothing in the technology that says it’s impossible, and if it happened, it’d be a huge deal, and you should pay attention to it. You should demand transparency from AI companies about exactly what they’re seeing here, and make sure you have third parties you trust who can test out AI systems for these properties.'” 

Late in December, OpenAI CEO Sam Altman published a job listing for a Head of Preparedness role, noting that the firm is seeing models capable of finding critical security vulnerabilities.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

Tags: Anthropic, Jack Clark, regulation, self-improving AI