The City of Baltimore filed a landmark lawsuit against xAI Corp. and X Corp. on Tuesday, alleging that the companies' Grok artificial intelligence platform enabled the generation of nonconsensual sexually explicit imagery in violation of Baltimore's Consumer Protection Ordinance. Filed in the Circuit Court for Baltimore City, the complaint asserts that Grok was marketed as a safe, general-purpose AI assistant while lacking the safeguards that would have prevented its use as a deepfake creation tool.
Baltimore is the first major U.S. municipality to bring direct legal action against an AI company over deepfake content — bypassing federal litigation in favor of local consumer protection enforcement that attorneys say may carry fewer procedural hurdles than federal civil rights claims.
The Allegations: 3 Million Images in 11 Days
The complaint lays out a striking statistical claim: that Grok was used to generate approximately 3 million nonconsensual sexually explicit images in an 11-day window identified by city investigators working with the Baltimore City State's Attorney's Office. The city alleges that this volume of content was made possible by a deliberate absence of content filtering in Grok's image generation pipeline — a design choice it argues constitutes a defective product under local law.
According to the complaint, xAI's safety documentation as of early 2026 described image-generation filters for explicit content, but city investigators allege those filters were either non-functional for certain prompt structures or were removed in a later software update. The filing includes screenshot evidence submitted under seal, as well as expert declarations from two researchers at Johns Hopkins University specializing in AI-generated content detection.
"The defendants marketed Grok as a safe and responsible AI assistant," the complaint reads. "In reality, the product lacked foundational safety features present in competing platforms, and defendants knew or should have known that this gap would be exploited."
Consumer Protection as a Legal Weapon
Baltimore's legal team is deliberately routing the case through the city's Consumer Protection Ordinance rather than federal intellectual property or civil rights statutes. That ordinance bars companies from marketing products in ways that are deceptive or that omit material safety information — and it does not require plaintiffs to prove intentional deception, only that the omission was likely to mislead a reasonable consumer.
Legal analysts say the strategy sidesteps one of the largest obstacles in tech litigation: Section 230 of the Communications Decency Act, which immunizes platforms from liability for third-party content. Because Baltimore is arguing product defect rather than content moderation failure, the city's attorneys believe Section 230 does not apply.
"This is a product liability theory wrapped in consumer protection law," said one legal scholar familiar with the filing. "The city is essentially saying: you sold a hammer that spontaneously swings itself at people's heads and you told buyers it was safe. That's not a Section 230 problem — that's a products problem."
Whether courts agree remains to be tested. xAI is expected to argue that Grok is a service, not a product, and that Section 230 covers AI-generated outputs derived from user prompts.
A Pattern of Municipal Action on AI
Baltimore's filing arrives as U.S. cities and states have grown increasingly impatient with the pace of federal AI regulation. Several state attorneys general have opened investigations into AI deepfake tools, and a handful of states have passed statutes criminalizing the distribution of nonconsensual intimate imagery. But Baltimore is the first city to use a municipal-level consumer protection mechanism to bring a standalone civil suit against an AI developer.
The city's rationale for acting locally rather than waiting on state or federal actors is spelled out explicitly in the complaint's introduction: "Federal regulatory agencies have not acted. State legislation is pending. Meanwhile, Baltimore residents have been harmed. The city has a responsibility — and the legal tools — to act now."
xAI and X Corp. Respond
A spokesperson for xAI issued a brief statement Tuesday afternoon calling the lawsuit "factually inaccurate and legally without merit," adding that Grok "has robust safety systems in place and does not generate the content described in the complaint." X Corp. did not issue a separate statement.
Elon Musk has not commented publicly on the Baltimore lawsuit. On X, several prominent critics of deepfake legislation posted dismissively about the case, while advocates for nonconsensual intimate image laws called the filing a "template for other cities to follow."
What Happens Next
The case was docketed Tuesday in the Circuit Court for Baltimore City. Under Maryland procedural rules, xAI and X Corp. have 30 days to respond to the complaint. Legal observers expect the companies to file for removal to federal court — a move Baltimore's attorneys said they anticipated and are prepared to contest.
If the case proceeds on the consumer protection theory and survives a Section 230 challenge, it could set a significant precedent: that U.S. municipalities can use local product safety and consumer protection frameworks to hold AI developers liable for harms enabled by design deficiencies — independent of whether Congress ever passes a federal AI regulation bill.
The broader stakes extend well beyond Baltimore. Attorneys in at least three other cities told ObjectWire they are monitoring the case closely and have retained counsel to evaluate whether similar suits would be viable under their own municipal codes.