OpenAI chief executive Sam Altman says teen safety will take precedence over privacy and freedom in the use of artificial intelligence, just as US regulators intensify scrutiny of how chatbot firms handle sensitive data.
In a new public essay, Altman says OpenAI is built on three often-conflicting principles: privacy, freedom, and teen safety.
He says AI conversations deserve the same level of legal protection as medical or legal consultations, but acknowledges exceptions when serious harm is at stake. Altman also says the firm will advocate with policymakers for the right to privacy in the use of AI.
“It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that it may be one of the most personally sensitive accounts you’ll ever have.”
But Altman says OpenAI will take a different approach for users under 18.
“We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection. We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience.”
The OpenAI chief executive says the rules extend to content boundaries.
“We will apply different rules to teens using our services. For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”
Last week, the US Federal Trade Commission opened a broad inquiry into consumer AI chatbots, issuing compulsory 6(b) orders to seven firms, including OpenAI. The regulator wants to see how companies measure and monitor negative impacts, process user inputs, generate outputs, and use information gleaned from chats, with a special focus on child safety.