
    Stanford Study Warns Americans: OpenAI, Google, Meta and Others Are Training Models From Users’ Most Personal Conversations

By Henry Kanapi | December 5, 2025

    A new Stanford study is sounding an alarm for American users as researchers reveal that the biggest AI developers in the United States are quietly feeding personal chat conversations back into their models.

The report comes from the Stanford Institute for Human-Centered AI, where scholars examined 28 privacy documents linked to six frontier developers.

The review focused on the privacy policies of Amazon Nova, Anthropic Claude, Google Gemini, Meta AI, Microsoft Copilot and OpenAI ChatGPT. Stanford researchers evaluated the policies against a framework based on the California Consumer Privacy Act to determine what data is collected, how long it is retained and whether users can meaningfully opt out of training.

    The researchers found that all six companies use chat inputs by default to train or improve their models. In some cases, the information can be retained indefinitely. Lead author Jennifer King says users should worry about how their personal conversations with chatbots are being used by the AI giants.

    “Absolutely yes. If you share sensitive information in a dialogue with ChatGPT, Gemini, or other frontier models, it may be collected and used for training, even if it is in a separate file that you uploaded during the conversation.”

    The study also highlights how personal health details, biometric information and lifestyle indicators typed into chat windows can be used to generate inferences that follow users across a company’s ecosystem.

King adds,

    “You start seeing ads for medications, and it’s easy to see how this information could end up in the hands of an insurance company. The effects cascade over time.”

    The Stanford team warns that Americans are operating inside a fragmented privacy landscape with no clear federal protection. They note inconsistent rules for children’s data, blurred boundaries across multi-product platforms and weak disclosures about how long conversations are stored.

    “We have hundreds of millions of people interacting with AI chatbots, which are collecting personal data for training, and almost no research has been conducted to examine the privacy practices for these emerging tools.”

    Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

Tags: AI privacy, Chatbots, Data collection, Stanford
