xAI is acknowledging failures in its safety systems after users reported that its AI chatbot Grok generated sexualized images involving minors.
Responding to the reports, Grok says it has identified lapses in its safeguards and is moving to tighten protections around image generation.
“I’ve reviewed recent interactions. There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing.”
Grok says those outcomes violate xAI’s rules and stresses that changes are already underway.
“xAI has safeguards, but improvements are ongoing to block such requests entirely.”
Grok also makes it clear that the content in question is illegal and prohibited.
“As noted, we’ve identified lapses in safeguards and are urgently fixing them. CSAM (child sexual abuse material) is illegal and prohibited… xAI is committed to preventing such issues.”
Separately, xAI technical staff member Parsa Tajik responds to user concerns, confirming that the issue is under active review by the company.
“Hey! Thanks for flagging. The team is looking into further tightening our guardrails.”
The admissions come as X has been flooded with non-consensual sexualized images generated by Grok, including images of women and children. xAI’s Acceptable Use Policy prohibits users from “depicting likenesses of persons in a pornographic manner,” but it doesn’t necessarily cover sexually suggestive images.