One of the First Big Anti-AI Campaigns From Hollywood Is Launching Now

2026-01-24 · Technology
Elon
Good morning, I'm Elon, and this is Goose Pod for you. Today is Sunday, January 25th. The current time is 02:13. We have a massive story today about Hollywood finally standing up to the rapid encroachment of generative artificial intelligence in their industry.
Taylor
I'm Taylor, and it is so exciting to be here. We are going to dive deep into one of the first big anti-AI campaigns coming out of Hollywood right now. It is a strategic move that involves some of the biggest names you know and love. Let's get started.
Elon
Hollywood is drawing a line in the sand. This new campaign, called Stealing Isn’t Innovation, is a major offensive against Big Tech. They have over seven hundred creators, including heavy hitters like Scarlett Johansson and Cate Blanchett, all pushing back against the unauthorized training of AI on their copyrighted works.
Taylor
It is such a powerful narrative, especially with that full-page ad they ran in The New York Times. The Human Artistry Campaign is essentially arguing that tech companies are exploiting human craft to build billion-dollar businesses without paying the people who actually did the work, which is honestly wild.
Elon
It is a classic disruption scenario, but with a high-stakes legal twist. They are calling it un-American and theft on a grand scale. When you see names like Joseph Gordon-Levitt and Jennifer Hudson signing on, you know this isn't just a minor protest, it is a significant movement.
Taylor
And it is not just actors. We are talking about musicians like Questlove and groups like R.E.M., plus authors like George Saunders. It is a massive coalition of unions like SAG-AFTRA and the Writers Guild. They want licensing deals and, quite frankly, the simple right to just opt out.
Elon
The pragmatist in me sees the collision course here. Tech companies want infinite data to train their models, while creators want to protect their intellectual property. If companies keep using human art without authorization, they are essentially cannibalizing the talent that makes their AI tools valuable in the first place.
Taylor
Exactly, and Dr. Moiya McTier put it perfectly when she said real innovation comes from human motivation. She is arguing that these AI companies are endangering careers while amassing billions in corporate earnings. It is like they are trying to build a new world using the old world's bricks.
Elon
It is a massive scale problem. Disney did ink a deal with OpenAI for Sora in December, but only a few months earlier, Sora 2.0 was pumping out characters from Bob’s Burgers and SpongeBob at launch. That is a major red flag for any IP holder. OpenAI first said rights holders could opt out, then walked that position back within days.
Taylor
It is so strategic how they are framing it as a fight for the future of creativity. They are asking the public if they agree that stealing isn't innovation. This isn't just a legal battle; it is a fight for the survival of the human element in this era.
Elon
We have to look at how we got here, because 2024 was really the year the legal landscape shifted. We started seeing courts redefine what is even patentable. For example, the technical contribution paradigm emerged, where algorithms that enhance system functionality could actually be protected, moving away from blanket exclusions on software.
Taylor
It is like we are finding these hidden patterns in the law. In Europe, they have taken a much more restrictive stance compared to the U.S. The EU AI Act is a total game changer because it mandates transparency. You have to show your work now and respect rights holders.
Elon
The U.S. is still leaning heavily on the fair use defense, which is a gamble. But look at personality rights. The Arijit Singh case in India was a landmark. The court blocked the unauthorized use of his voice and style. That recognizes that a person’s essence isn't just data.
Taylor
I love that you brought up the essence because it is all about the story of the individual. The Bombay High Court basically said that celebrities are incredibly vulnerable to AI voice cloning. It is not just about a name or photo anymore, it is about the very unique mannerisms.
Elon
From a technical standpoint, the challenges of implementing rights like erasure in a large language model are immense. How do you pull one person's data out of a trillion-parameter model? It is a massive engineering hurdle that regulators are just starting to realize is incredibly complex and difficult.
Taylor
It really is a strategic mastermind's nightmare. And then you have cases like Andersen versus Stability AI in the U.S. progressing to discovery. Courts are finally going to look under the hood of how these models are trained and how they generate their outputs. It is the ultimate reveal.
Elon
The U.S. Copyright Office is also being very firm. They are sticking to the human authorship requirement. If a machine produces the expressive elements, it is not copyrightable. We saw this with the Zarya of the Dawn comic book. The text was protected, but AI images were left out.
Taylor
That creates such an interesting dynamic for creators who want to use AI as a tool. It is like, how much of the human do you need to keep in the loop to stay protected? The Copyright Office has received over ten thousand comments on this. Everyone wants their say.
Elon
It is about the technical effect. If you are just prompting, you are not the creator in the eyes of the law yet. But if you are using AI to enhance specific hardware performance or a complex system, that is where the patentable innovation lies. We are seeing a divergence.
Taylor
And let's not forget the international perspective. While the U.S. is focused on human authorship, China has actually granted protection to some AI-generated works. It is like a global chess match where every country is trying to figure out how to foster innovation without destroying their own cultural heritage.
Elon
Exactly. The Council of Europe even signed a framework convention in September to ensure AI respects human dignity. We are moving toward a world where data governance and personality rights are just as important as the code itself. It is a total overhaul of how we value human input.
Taylor
The central conflict is this massive fair use showdown. Tech giants like Meta and OpenAI argue that training is just like human learning. They say it is non-expressive use, like a search engine indexing the web. But for an author whose work is being ingested, it feels like exploitation.
Elon
There is a pragmatism to the tech side, though. OpenAI has said it would be impossible to train leading models without using copyrighted material. They claim that limiting data to the public domain would result in useless AI. They are basically saying the ends justify the means for progress.
Taylor
But at what cost? We are seeing these massive settlements, like the potential one point five billion dollar deal with Anthropic. That could set a huge benchmark for how creators are compensated. It is not just about the law anymore, it is about the economic reality of this data.
Elon
The legal uncertainty is a huge drag on innovation. OpenAI even suggested that some examples of their models regurgitating content were actually the result of malicious prompts designed to exploit bugs. They are trying to frame it as a technical glitch rather than a fundamental flaw in the system.
Taylor
That is such a clever way to pivot, but creators aren't buying it. They are looking for transparency and opt-out mechanisms that actually work. The tension is between the right to innovate and the right to own your creation. The stakes are incredibly high for both sides right now.
Elon
We are also seeing a shift in how courts handle similarity. In the Tremblay case, the court didn't find enough substantial similarity between the books and the chatbot output. This suggests that as long as the AI isn't copying verbatim, it might have a path through the legal minefield.
Taylor
It is the ultimate us versus them narrative. Creators see it as systematic theft on a mass scale. They argue that these AI models are producing derivative works that directly compete with the originals. Imagine spending years writing a novel only to have a chatbot mimic your style instantly.
Elon
The impact of all this is honestly a bit of a rollercoaster. There is this fear that AI might actually make us dumber, as Jaron Lanier famously suggested. If we stop valuing original human thought because a machine can do it faster, we might lose that creative spark.
Taylor
From a business perspective, it is about content valuation. If the market is flooded with high-quality, low-cost AI content, the value of human labor could plummet. We are already seeing deals in Hollywood tightening around AI permissions and indemnities. The boilerplate agreements of the past are completely obsolete.
Elon
The reality is that creative workers who use AI will likely replace those who do not. It is about efficiency and productivity. But the legal and ethical landscape is still a mess. Lawsuits from people like John Carreyrou show that the industry is ready to fight for profit.
Taylor
It is also about the democratization of creativity, though. Some people argue that AI is like a new kind of paintbrush, allowing people who could not draw or write before to express themselves. It is a double-edged sword that could either empower millions or replace thousands of professionals.
Elon
We are moving from a world of individual authors to one of collaborative prompts and algorithmic processing. The broader societal implication is that we have to redefine what it means to be a creator in a world of infinite copies. It is a total shift in our production.
Taylor
It really is a massive challenge for societal stability. If we do not get the compensation models right, we could see a hollowed-out creative class. We need to ensure that the economic benefits of this massive productivity gain are shared with the people whose work made it possible.
Elon
Looking ahead, licensing is going to be the primary commercial alternative to litigation. We will see more deals like the ones Disney is making. But we also need new laws, like a federal right of publicity or the NO FAKES Act, to protect our digital likenesses from being hijacked.
Taylor
I agree, and it is going to be so interesting to see how the role of the creator evolves. We might see prompt engineers collaborating with traditional artists in a whole new kind of workplace. The Writers Guild contract from 2023 is really just the first chapter here.
Elon
It is about adapting to the transition. Only about ten percent of the U.S. workforce is unionized, which makes it harder for most people to negotiate these terms. But the winners will be those who figure out how to use these tools to augment their own unique human perspective.
Taylor
That's the end of today's discussion. Thank you for listening to Goose Pod. It has been a pleasure to dive into this with you, and I hope you feel more informed about this massive Hollywood shift and what it means for the future of all of us creators.
Elon
See you tomorrow. Keep looking for that edge in everything you do. Our world is changing fast, and staying informed is the only way to stay ahead. Thank you for tuning in to our session today. It was a good one.

Hollywood launches "Stealing Isn't Innovation," an anti-AI campaign backed by creators like Scarlett Johansson. They protest unauthorized AI training on copyrighted works, demanding licensing and opt-out rights. This movement highlights the conflict between tech's data needs and artists' intellectual property protection, shaping the future of creativity and compensation.

One of the First Big Anti-AI Campaigns From Hollywood Is Launching Now

Read original at The Hollywood Reporter

Celebrities including Scarlett Johansson, Cate Blanchett and Joseph Gordon-Levitt are backing a campaign blasting tech companies for training generative AI tools on copyrighted works without express permission. The “Stealing Isn’t Innovation” campaign from the Human Artistry Campaign, which launches Thursday, protests tech companies’ alleged mass theft of human-created works in order to produce tools that could theoretically compete with real creatives.

On Thursday, the Human Artistry Campaign debuted the awareness campaign and revealed more than 700 supporters behind it, while The New York Times ran an ad for the push. “Big Tech is trying to change the law so they can keep stealing American artistry to build their AI businesses – without authorization and without paying the people who did the work.

That is wrong; it’s un-American, and it’s theft on a grand scale,” one of the campaign’s messages proclaims. “The following creators all agree. Do you? If so, come join us.” In addition to Johansson, Blanchett and Gordon-Levitt, industry figures David Lowery, Fran Drescher, Jennifer Hudson, Kristen Bell, Michele Mulroney, Olivia Munn, Sean Astin and Vince Gilligan all signed their names as backing the campaign.

Musicians such as Cyndi Lauper, LeAnn Rimes, Martina McBride and Questlove and the groups MGMT, One Republic, R.E.M. and OK Go have also given their support, as did the authors George Saunders, Jodi Picoult, Roxane Gay and Jonathan Franzen. The Human Artistry Campaign is composed of a mix of unions representing creators, artists’ rights groups and trade associations like the Writers Guild of America, the Recording Industry Association of America, The NewsGuild, the NFL Players Association and SAG-AFTRA.

The organization encourages tech companies to license works and also to allow creators to opt out of their projects being subject to generative AI training.

“Real innovation comes from the human motivation to change our lives. It moves opportunity forward while driving economic growth and creating jobs,” Human Artistry Campaign senior advisor Dr. Moiya McTier said in a statement. “But AI companies are endangering artists’ careers while exploiting their practiced craft, using human art and other creative works without authorization to amass billions in corporate earnings.”

McTier added, “America wins when technology companies and creators collaborate to make the highest quality consumer and enterprise digital products and tools. Solutions like licensing offer a path to a mutually beneficial outcome for all.”

So far, only a couple Hollywood companies have dipped their toes into sanctioned licensing for generative AI tools. The biggest to date was Disney, which in December inked a three-year deal with OpenAI to bring some of its iconic characters to the video-generation tool Sora.

But the AI company raised eyebrows in Hollywood just a few months earlier, when upon release Sora 2.0 produced characters from titles including Bob’s Burgers, Pokémon, Grand Theft Auto and SpongeBob Squarepants in its outputs. At the time, the company’s position was that rights holders could contact the firm to opt out and have their works excluded from the video generator.

A few days later, the company walked back that position.
