
    Prediction Market Reveals US AI Regulation Almost Certain To Miss 2025 Despite Concerns on Model Safety

By Henry Kanapi | November 30, 2025 | 2 min read

    New data from prediction markets signals that the US government will fall short of enacting AI regulation this year.

Data from Kalshi shows that the odds of AI regulation becoming law this year have plummeted to 3% after peaking at 84.6% in May.

    Amid the falling odds, traders can buy a “Yes” contract for as low as $0.06, while those taking the other side of the bet will have to shell out $0.96 per contract. So far, the market has accumulated $93,616 in bets.
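As a rough illustration (not Kalshi's own methodology), binary contract prices like these can be read as implied probabilities: a $0.06 "Yes" ask and a $0.96 "No" ask sum to $1.02, and normalizing away that 2-cent overround gives the market's implied odds. The variable names below are illustrative, not Kalshi terminology.

```python
# Read binary prediction-market contract prices as implied probabilities.
# A binary contract pays $1 if its side resolves true, so its price in
# dollars is (roughly) the probability the market assigns to that outcome.

yes_ask = 0.06  # cost of a "Yes" contract, in dollars
no_ask = 0.96   # cost of a "No" contract, in dollars

# The two asks sum to 1.02 rather than 1.00; the extra 2 cents is the
# spread ("overround"). Normalizing removes it:
total = yes_ask + no_ask
p_yes = yes_ask / total  # implied probability regulation passes
p_no = no_ask / total    # implied probability it does not

print(f"P(Yes) ≈ {p_yes:.1%}, P(No) ≈ {p_no:.1%}")
```

Note the raw $0.06 ask implies closer to 6% than the 3% headline figure; quoted "odds" on such platforms often reflect the last trade or midpoint rather than the current ask.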

    “If a bill becomes law regulating large language models (for example, banning them, limiting how they can be trained, or limiting how they can be used) by Dec 31, 2025, then the market resolves to Yes. Outcome verified from Library of Congress.”

    Source: Kalshi

    In May, US President Donald Trump signed into law the Take It Down Act, which criminalizes the non-consensual sharing of intimate images and deepfakes, and requires online platforms to establish a notice-and-takedown process for such content.

    But the Kalshi market did not resolve to “Yes” upon the signing of the law, as the bill only covers the publication of digital forgeries.

    “This does not regulate the creation, training, use, or export of large language models. A bill banning the creation of such images using large language models would be an example of a bill that would resolve the market to Yes.”

The news comes as AI experts sound the alarm on model safety. In a recent interview, OpenAI co-founder Ilya Sutskever warned that AI models will become visibly more powerful, prompting governments and the public to act.

In September, the U.S. Federal Trade Commission (FTC) opened a probe into how companies handle data from AI chatbots. The regulator wants to see how companies measure and monitor negative impacts, process user inputs, generate outputs, and use information gleaned from chats, with a special focus on child safety.

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

Tags: AI models, AI regulation, AI safety, Kalshi