OpenAI's PR Challenge: Navigating Sora's Copyright Controversies

Chris Lehane faces an 'impossible mission' as OpenAI grapples with copyright claims and public perception.

OpenAI's VP of Global Policy, Chris Lehane, is on a difficult mission. He aims to convince the world of OpenAI's commitment to democratizing AI. However, the company faces growing scrutiny over its use of copyrighted material, especially with its new Sora video generation tool.

By Mark Ellison

October 11, 2025

4 min read

Key Facts

  • Chris Lehane is OpenAI's VP of Global Policy, tasked with managing public perception.
  • OpenAI's Sora video generation tool launched with content that appears to draw on copyrighted material.
  • Sora quickly became the #1 app on the U.S. App Store despite copyright concerns.
  • OpenAI initially used an 'opt-out' model for training data, then 'evolved' to 'opt-in'.
  • OpenAI faces lawsuits from major publishers, including the New York Times and Toronto Star.

Why You Care

Ever wonder whether the AI tools you rely on are built on shaky ground? What if the creative AI you use every day was trained on content used without permission? OpenAI, a leading AI developer, is facing exactly that question. Its Vice President of Global Policy, Chris Lehane, is working to manage public perception and to show that OpenAI is dedicated to democratizing artificial intelligence, even as the company navigates increasing scrutiny over its practices, particularly around copyright. Your trust in AI tools could depend on how these issues are resolved.

What Actually Happened

Chris Lehane, a seasoned crisis manager, has taken on a tough role at OpenAI. As VP of Global Policy, his job is to convince the public that OpenAI genuinely cares about democratizing AI, even as the company is criticized for behaving like any other large tech company. OpenAI recently launched Sora, a new video generation tool that appears to draw on copyrighted material, and the company already faces lawsuits from major publishers, including the New York Times and the Toronto Star. OpenAI's initial approach let rights holders opt out of having their work included in training data; it then "evolved" to an opt-in model after users showed strong interest in generating copyrighted images.

Why This Matters to You

This situation directly affects creators, businesses, and anyone using AI. If you create content, your work might be used to train models without explicit permission; if you use AI tools, you might unknowingly generate output derived from copyrighted material, which could expose you or your business to legal complications. OpenAI's handling of Sora highlights a broader industry debate about fair use and intellectual property in the age of AI.

Key Areas of Impact:

  • Content Creation: AI tools may use your original works without compensation.
  • Legal Risks: Using AI-generated content could expose you to copyright infringement claims.
  • Ethical Concerns: The debate questions the ethical creation of AI.
  • Market Dominance: Large AI companies could assert dominance through controversial means.

"OpenAI initially 'let' rights holders opt out of having their work used to train Sora," the article states, which is not how copyright typically works and raises questions about transparency and creator rights. How would you feel about your creative work being used to train AI models without your direct consent? Imagine you're an artist, and an AI tool produces images in your unique style because it was trained on your portfolio without permission. That is the core issue at hand, and it affects how you create and consume digital content.

The Surprising Finding

Here's the twist: from a business and marketing perspective, launching Sora was "brilliant," the article explains. Despite the copyright controversies, the invite-only app soared to the top of the App Store as people rushed to create digital versions of themselves and popular characters, even generating images of dead celebrities like Tupac Shakur. That success came even though the tool appears to rely on copyrighted material. It is surprising because one might expect backlash to hurt adoption, yet user engagement was extremely high. This challenges the assumption that ethical concerns always outweigh utility for users and suggests that many people prioritize creative freedom and accessibility without thinking much about the underlying data sources.

What Happens Next

OpenAI will likely continue to face legal battles and public relations challenges, and discussions about copyright and fair use in AI-generated content will continue over the next year. Future AI models may need stronger opt-in mechanisms for training data, possibly by late 2025 or early 2026, and companies may start offering clearer compensation models for creators to address concerns about economic fairness. For you, this means staying informed about AI's legal landscape: check how the AI tools you use source their training data, and consider supporting platforms that prioritize ethical data sourcing. The industry's future depends on balancing creation with respect for intellectual property. Lehane has invoked fair use, calling it the "secret weapon of U.S. tech dominance," which signals a strong legal defense strategy from OpenAI.
