    AI-Generated Child Sexual Abuse Videos Explode 26,362% As 2025 Becomes Worst Year on Record, Warns Internet Watchdog

By Henry Kanapi | January 17, 2026

    A surge in AI-generated child sexual abuse videos pushed 2025 to the worst year on record for online child sexual abuse material, according to new data from the Internet Watch Foundation (IWF).

    The IWF says its analysts identified a dramatic escalation in photo-realistic AI videos depicting child sexual abuse, warning that rapidly improving tools are enabling criminals to create extreme material at scale with minimal technical skill.

Analysts at the watchdog say that offenders are exploiting the realism and accessibility of these tools to produce content that would previously have required organized networks or direct access to victims.

    According to data released on January 16, the IWF discovered 3,440 AI-generated videos of child sexual abuse in 2025, up from just 13 the year before, representing a staggering 26,362% increase. Of those videos, 65% were classified as Category A, the most severe classification under UK law. By comparison, 43% of non-AI child sexual abuse videos identified in 2025 fell into Category A.

    The IWF says Category A material can include penetration, sexual torture and bestiality. A further 30% of AI-generated videos identified in 2025 were classified as Category B, the second most severe category.

The scale of the problem extended beyond AI content alone. The IWF says it took action on 312,030 reports in 2025 in which analysts confirmed the presence of child sexual abuse material, a record total and a 7% increase from the 291,730 reports confirmed in 2024. The organization said this marked the highest volume of removals in its 30-year history.

    Kerry Smith, chief executive of the Internet Watch Foundation, says the technology is fundamentally changing the threat landscape.

    “When images and videos of children suffering sexual abuse are distributed online, it makes everyone, especially those children, less safe. Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.”

    She warns that the growing availability of extreme AI-generated content risks emboldening offenders and normalizing sexual violence against children.

    “The frightening rise in extreme Category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous. Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation, and further endanger children both on and offline.”

    The IWF calls on governments and regulators worldwide to force AI companies to embed safety-by-design principles from the outset, warning that current safeguards are insufficient to prevent misuse. The organization says that without intervention, AI tools risk accelerating the spread and severity of child sexual abuse material online.


