
    AI-Generated Child Sexual Abuse Videos Explode 26,362% As 2025 Becomes Worst Year on Record, Warns Internet Watchdog

By Henry Kanapi | January 17, 2026

    A surge in AI-generated child sexual abuse videos pushed 2025 to the worst year on record for online child sexual abuse material, according to new data from the Internet Watch Foundation (IWF).

    The IWF says its analysts identified a dramatic escalation in photo-realistic AI videos depicting child sexual abuse, warning that rapidly improving tools are enabling criminals to create extreme material at scale with minimal technical skill.

    Analysts at the watchdog say that offenders are exploiting realism and accessibility to produce content that would previously have required organized networks or direct access to victims.

    According to data released on January 16, the IWF discovered 3,440 AI-generated videos of child sexual abuse in 2025, up from just 13 the year before, representing a staggering 26,362% increase. Of those videos, 65% were classified as Category A, the most severe classification under UK law. By comparison, 43% of non-AI child sexual abuse videos identified in 2025 fell into Category A.

    The IWF says Category A material can include penetration, sexual torture and bestiality. A further 30% of AI-generated videos identified in 2025 were classified as Category B, the second most severe category.

The scale of the problem extends beyond AI content alone. The IWF says it took action on 312,030 reports in 2025 in which analysts confirmed the presence of child sexual abuse material, a record total and a 7% increase from the 291,730 reports confirmed in 2024. The organization said this marked the highest volume of removals in its 30-year history.

    Kerry Smith, chief executive of the Internet Watch Foundation, says the technology is fundamentally changing the threat landscape.

    “When images and videos of children suffering sexual abuse are distributed online, it makes everyone, especially those children, less safe. Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.”

    She warns that the growing availability of extreme AI-generated content risks emboldening offenders and normalizing sexual violence against children.

    “The frightening rise in extreme Category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous. Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation, and further endanger children both on and offline.”

    The IWF calls on governments and regulators worldwide to force AI companies to embed safety-by-design principles from the outset, warning that current safeguards are insufficient to prevent misuse. The organization says that without intervention, AI tools risk accelerating the spread and severity of child sexual abuse material online.

