Deepfake Porn Lawsuits: A Global Legal Maze

A New Jersey case highlights the immense challenges in combating AI-generated non-consensual imagery.

Fighting deepfake porn is proving incredibly difficult, as shown by a New Jersey lawsuit against the app ClothOff. The case reveals the complexities of international jurisdiction and the slow pace of legal action against creators of non-consensual AI-generated content. Victims often face a frustrating path to justice.

By Katie Rowan

January 13, 2026

4 min read

Key Facts

  • The app ClothOff has been generating deepfake porn for over two years.
  • A New Jersey lawsuit filed by a Yale Law School clinic aims to shut down ClothOff entirely.
  • Defendants are believed to operate from the British Virgin Islands and Belarus, making them hard to locate.
  • General-purpose AI tools like Grok are difficult to hold accountable in court for misuse.
  • The legal process for deepfake cases is slow, often taking months to serve defendants.

Why You Care

Imagine finding your image, or that of someone you know, manipulated into non-consensual deepfake porn. What would you do? A recent New Jersey lawsuit shines a harsh light on how incredibly difficult it is to fight such digital abuse. This isn’t just a distant problem; it affects real people and poses significant questions about digital safety and accountability in the age of AI. Your digital footprint is more vulnerable than you might think.

What Actually Happened

For over two years, an app called ClothOff has been generating deepfake porn, primarily targeting young women. The app has been removed from major app stores and is banned from most social media platforms, but it remains accessible via the web and a Telegram bot. In October, a clinic at Yale Law School filed a lawsuit aiming to shut down ClothOff entirely, seeking to force its owners to delete all images and cease operations. The primary challenge so far has been simply locating the defendants. Professor John Langford, a co-lead counsel, explained the difficulties: “It’s incorporated in the British Virgin Islands, but we believe it’s run by a brother and sister in Belarus. It may even be part of a larger network around the world.”

Why This Matters to You

This case offers a bitter lesson amid a recent surge in non-consensual pornography, including content generated with AI tools like xAI’s Grok. Child sexual abuse material (CSAM) is illegal to produce, transmit, or store, and major cloud services regularly scan for it. Yet despite these strong legal prohibitions, there are few effective ways to combat image generators like ClothOff. Individual users can face prosecution, but the platforms themselves are far harder to police, leaving victims with limited options for justice in court. The case has also progressed slowly: the complaint was filed in October, and serving notice to the defendants has been a difficult, ongoing task.

What steps can you take to protect yourself or others from this emerging threat?

Challenges in Deepfake Porn Litigation

  • Jurisdiction: Companies often operate across international borders.
  • Identification: Defendants can be hard to locate and serve legal papers.
  • Platform Accountability: General-purpose AI tools are difficult to hold liable.
  • Speed of Justice: Legal processes are slow, while deepfakes spread quickly.

For example, imagine a deepfake of your friend appearing online. Reporting it might lead to its removal from one platform, but the international nature of these apps means it could easily reappear elsewhere, creating a frustrating cycle. As the legal team noted, “Neither the school nor law enforcement ever established how broadly the CSAM of Jane Doe and other girls was distributed.” This highlights how difficult it is even to quantify the damage.

The Surprising Finding

Here’s the twist: while individual deepfake creators can be prosecuted, holding the platforms themselves accountable is far more complex. The Grok case, involving Elon Musk’s xAI, might seem simpler: xAI isn’t hiding, and there is plenty of money for lawyers to pursue. But because Grok is a general-purpose AI tool, it is much harder to hold accountable in court. This challenges the common assumption that large, visible companies are easier targets for legal action, and it reveals a significant loophole in current legal frameworks: the legal focus shifts from the offending content to the tool’s broad functionality, making it difficult to prove direct liability for misuse.

What Happens Next

The legal battle against ClothOff is ongoing. Langford and his colleagues are still working to serve notice to the defendants; once served, they can push for a court appearance and an eventual judgment. That process could take many more months, potentially extending into late 2026 or even 2027. Meanwhile, deepfake technology continues to evolve, and the risks evolve with it. New legislation may be needed to address the liability of general-purpose AI tools for the creation of non-consensual imagery. The industry implications are significant: tech companies may face increased pressure to implement stricter content moderation and better identity verification for AI image generation. Our advice: stay informed about digital safety, and consider advocating for stronger legal protections. Both are crucial for navigating the evolving landscape of AI-generated content.
