The one big thing everyone’s missing about the Grok scandal
By ignoring the Grok uproar, the Trump Administration lets its façade of concern for children’s online safety crumble.
While the left and right rarely agree, a moment of near-unanimous alignment took place in 2025 with the federal passage of the “Take It Down” Act. Even in a time of hyper-partisanship, Republicans and Democrats united to say: AI shouldn’t be used to create and share non-consensual sexual images. That was a red line we simply wouldn’t cross.
Which is why the beginning of 2026, and the Trump Administration’s failure to respond to a growing uproar around the generative AI platform Grok, has felt like a point of no return for anyone concerned about AI safety, especially for children.
In late December, Elon Musk’s xAI and its Grok Imagine tool made it as easy to create and distribute non-consensual pornography and child sexual abuse material (CSAM) as any internet cat meme.
While concerns about “nudify” apps and deepfakes have been growing for years, xAI’s particular contribution was to enable users of X (formerly Twitter), with a few easy clicks, to edit pictures of real people in highly sexualized ways and then widely distribute the resulting images across the social platform.
Over the past few days, xAI finally began putting some limits in place so that it’s more difficult (but not impossible) to create these images. But as outrage grew, Musk outright mocked complaints and denied what was happening, all while his company called verified reports “legacy media lies.”
It would be easy to overlook why this moment is so pivotal, given that deepfakes and “nudify” apps have been around for at least two years. But what’s happening on xAI since December is categorically different, given the ability to both create and widely distribute harmful sexual content based on images of real people.
What’s also different is that in the face of criminal activity, the Trump Administration has failed to respond, even as other nations have acted quickly and decisively. You might ask: “Is that really a surprise, given Trump’s close relationship with Elon Musk and his companies?” But here’s what people are missing: It is perhaps not a surprise, but this moment represents a distinct turning point for the Trump Administration.
Until now, in response to a MAGA base that has zero tolerance for child sexual abuse, Trump has walked both sides of the line. He’s called for unfettered AI development while purporting to stand with his base’s calls to protect children from generative AI harms (for example, by signing the “Take It Down” Act into law).
Now, in one of the first public examples of the incompatibility of holding both positions, the Trump Administration’s silence has made clear that when push comes to shove, it simply won’t stand up to Big Tech. Children’s online safety was only on the table when it was convenient for base-building.
In contrast, consider how quickly other countries responded to the illegal images on Grok. In the early days of the year, India, France, and the UK all quickly launched investigations into xAI; other countries banned Grok outright.
In the US, even three weeks into 2026, there has been no action from Trump’s Federal Trade Commission, Federal Communications Commission, or Department of Justice calling on xAI to pause or eliminate this feature. The primary response has come from Trump’s State Department, which perversely rose to xAI’s defense against foreign countries considering censuring the platform.
Stepping back for a moment, we all know this is a difficult moment in the US. There is an urgent need for civic participation to ensure that the things we hold most dear as a country remain. And while I remain stubbornly optimistic about US democracy, I can’t help but feel pessimistic about our children growing up in a world of generative AI.
How can we possibly expect AI companies to protect children, for example, from over-reliance on chatbot relationships, if we can’t even prevent those same companies from enabling the easy creation and distribution of child sexual abuse material using pictures of our own children?
Already, the complexity of these systems and a web of integrated apps allow companies like xAI to easily shirk responsibility. Take, for example, Grok’s January 14 claim that it had disabled the functionality that allowed X users to edit and share non-consensual images directly on X by prompting @Grok with text commands.
In other words, within the X app, Grok said that users could no longer view an image of a real person, prompt @Grok with inappropriate commands like “put her in a see-through bikini” and then publicly post the AI-generated image.
What this fails to explain, as anyone with a free Grok app account can easily prove, is that any Grok app user can capture a digital picture, upload it into the Grok app, use text prompts to create a sexualized image, and then, with a single click, pop that image right back into X. (I was able to verify this myself on January 19, 2026, five days after Grok claimed the problem had been resolved.)
While X and Grok Imagine are separate apps, their seamless integration makes Grok’s claims of fixing the problem on the X platform next to meaningless. (It’s important to note, however, that the Grok app does appear to have instituted more content controls for prompts related to children, even as it continues to sexualize pictures of women.)
But the technical details are unimportant in light of the bigger picture. This was a known safety concern, advocates’ clear calls for prevention went unheeded, and now there is no federal accountability for the company enabling the harms.
This leaves in question a fundamental agreement that seemed until recently to unite the right and the left: When it comes to generative AI safety, are there any red lines left? And if so, who will call on the Trump Administration to enforce them?
In my next post, I’ll share a timeline detailing the events of the past month. The record needs to show: What’s happening now is different. Even in the face of criminal activity, Republican leadership has failed to protect women and children from generative AI harms. More to come this week.
Before I go, I want to share this excellent post from my former organization, ParentsTogether Action. Check it out for concrete tips about how parents can talk to their kids about things like deepfakes. And find out what we can all do to take action.