Two high-profile AI researchers are issuing dire warnings about the progress of artificial intelligence after leaving their respective companies.
In a new post on X, Anthropic AI safety engineer Mrinank Sharma says his last day at the firm was on February 9th, noting that it was time for him to move on.
In his farewell note, he warned that the world is facing a cocktail of existential threats.
“I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences. Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
At the same time, artificial intelligence researcher Zoë Hitzig announced her departure from OpenAI, saying she left the private company on the same day it began testing ads in ChatGPT.
“OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the tidal forces pushing them to abuse it?”
In an opinion piece published in The New York Times, Hitzig says she’s deeply concerned about how OpenAI’s massive trove of human data might be used for profit.
“For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda. Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
In January, OpenAI CEO Sam Altman said that ads on ChatGPT would be governed by explicit guardrails designed to protect user trust.
Disclaimer: Opinions expressed at CapitalAI Daily are not investment advice. Investors should do their own due diligence before making any decisions involving securities, cryptocurrencies, or digital assets. Your transfers and trades are at your own risk, and any losses you may incur are your responsibility. CapitalAI Daily does not recommend the buying or selling of any assets, nor is CapitalAI Daily an investment advisor. See our Editorial Standards and Terms of Use.

