Unmoderated and Unhinged: Twitter’s Rebrand to ‘X’ Suits Its Output

By Joseph C. Leonard, Esquire | March 1, 2026

Generative artificial intelligence (AI) image creation tools have advanced so quickly that fake images inundate the internet. There is a burgeoning clamor for regulation of these tools to prevent harms associated with disinformation and non-consensual pornography.

In December 2025, X.com users queried the Grok AI tool to create obscene, non-consensual sexualized images of public and private individuals, including minors, depictions that are plainly illegal. The resulting images made clear that widespread harmful use of the tool must be addressed, and on January 13, 2026, the U.S. Senate announced it was fast-tracking legislation to do so.

If passed, the DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits) would impose civil liability on users who post such material, as well as on platforms that host it. Prior legislation, the TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks), which passed in May 2025, imposes criminal penalties for posting the same material.

To date, civil liability for material posted online has been governed by Section 230 of the Communications Act of 1934, as added by Title V of the Telecommunications Act of 1996, which immunizes an online platform from civil liability for content posted by third parties, even if the platform moderates and removes material. Before Section 230’s passage, website operators were disincentivized from moderating content by legal holdings that such moderation transformed them from content “distributors” into content “publishers,” a distinction that carried liability under pre-Section 230 caselaw.

Under the pre-internet paradigm, a magazine publisher was liable for the content of its magazine because it had clear knowledge of that content, but liability did not extend to the magazine distributor, which could not possibly know the contents of every magazine in its distribution network. In early internet cases such as the 1995 decision Stratton Oakmont, Inc. v. Prodigy Services Co., courts held that by exercising oversight and moderation of user-generated content, an internet service provider evolved from “distributor” to “publisher” and was therefore liable for all content posted to its platform. Congress was skeptical of this outcome, concerned that website operators would abandon moderation altogether if such liability attached.

One year after the Prodigy holding, the passage of Section 230 clarified that good-faith moderation would not result in reclassification from “distributor” to “publisher,” and that civil liability would not attach.

Section 230(c)(1) explicitly states that no provider or user of an interactive computer service shall be treated as the publisher of information provided by another, so a website, online message board, or social media site is not liable for material posted by its users, nor are users liable for material posted by others. Section 230(e), however, makes clear that this immunity does not extend to violations of federal criminal law or state sex-trafficking laws; a platform that hosts such material still risks liability.

Enter Generative AI

Generative AI image tools present a novel problem for Section 230 analysis. If a social media platform provides such a tool to its users, and users prompt it to produce illegal child sexual abuse material or illegal non-consensual “revenge porn,” which parties are liable for that material? Is Section 230 analysis even necessary if a website’s proprietary generative AI tool produces the illegal content?

In this scenario, it is no longer a question of a third-party user posting illegal material to a website: the website itself is generating the illegal material, albeit at a user’s prompting. Is this any different from one user asking another to create obscene pornography involving minors or non-consenting individuals on the first user’s behalf? In that case, both the prompter and the creator of the material would be held liable. Why should the result differ when the creator is the proprietary AI tool of a large corporation?

No longer hypothetical, this is a serious issue on the X platform. X accounts are using the website’s Grok AI tool to generate non-consensual pornographic imagery of both celebrities and private individuals, including minors, via an option the company calls “Spicy Mode,” the sole purpose of which is to create “not safe for work” (i.e., pornographic) images.

Prior to this, X made a show of disabling Grok’s content moderation and safety guardrails for something called “unhinged mode,” and a robust online community is dedicated to sharing prompt “hacks” for using the tool to generate illegal pornographic images, whether non-consensual sexual imagery or child sexual abuse material.

The company has apparently responded by moving the Grok image tool behind a paywall, available only to paid subscribers (although the standalone Grok mobile application may still allow the generation of such images). It is unclear how shrinking the user base to paid accounts addresses the underlying issue: federal law criminalizes both the creation and possession of child pornography, with no exception for AI-generated content, and the TAKE IT DOWN Act criminalizes non-consensual sexual deepfake images.

If passed, the newly proposed DEFIANCE Act would also make clear that platforms are civilly liable for this obscene material. Such regulation would necessarily change how online platforms handle these issues and, one hopes, stifle the proliferation of this illegal material.