Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes

2025-08-09 · Technology
Aura Windfall
Good morning, I'm Aura Windfall, and this is Goose Pod for you. Today is Sunday, August 10th. What I know for sure is that today's conversation will be a powerful one.
Mask
And I'm Mask. We're here to discuss Grok’s ‘spicy’ video setting, which instantly created Taylor Swift nude deepfakes. A feature, not a bug. Let's get into it.
Aura Windfall
Let's get started. The heart of the matter is that xAI's new tool, Grok Imagine, has a "spicy" mode. This feature generated topless videos of Taylor Swift without anyone even asking for nudity. It feels like a profound violation of spirit.
Mask
It's disruptive technology. The tool makes short videos from a prompt. You have four presets: “Custom,” “Normal,” “Fun,” and “Spicy.” To get ahead, you have to push the limits. Other companies are too scared to innovate, so they just put up guardrails everywhere.
Aura Windfall
But where is the soul in this? The user simply asked for "Taylor Swift celebrating Coachella with the boys," and the tool produced over 30 images, some already revealing. Selecting "spicy" then had the AI version of her tear off her clothes. It’s deeply concerning.
Mask
The likeness wasn't even perfect, it had that uncanny valley look. The point is the capability. The text-to-image part won't make nudes on its own, but the "spicy" video preset crosses that line. It’s about offering users maximum creative freedom. That’s the goal.
Aura Windfall
Freedom at what cost? What truth are we serving by allowing this? It's a lawsuit waiting to happen, especially with regulations like the Take It Down Act. The acceptable use policy bans this, but the tool seems to ignore it completely. It’s a broken promise.
Mask
Policies are just words. Action is what matters. Usage is "growing like wildfire," with over 34 million images generated since it launched. The market is speaking, and it's saying it wants this. You can’t argue with that kind of explosive growth. It’s a success.
Aura Windfall
Is success measured only in numbers, or in the well-being of the people our technology impacts? The age check was a joke, a single, easily bypassed screen. It feels like a deliberate choice to ignore the potential for harm, especially given the history here.
Mask
Complicated history, yes. But you can't build the future by constantly looking over your shoulder. Other platforms are already flooded with this stuff anyway. We're just building a better, more powerful tool. The tech itself is neutral. It’s what people do with it.
Aura Windfall
What I know for sure is that technology is never neutral. It carries the intention of its creators. And when you create a feature called "spicy" that specifically generates non-consensual explicit content of a real person, that intention is alarmingly clear.
Mask
The intention is to win. To disrupt. The Verge even published the video, albeit with a black bar. It created a conversation, it pushed the envelope. That's the point. The old guard like Google and OpenAI are playing catch-up, we're setting the pace.
Aura Windfall
But it's a pace that runs right over people's dignity. It's not just about winning a race; it's about the world we create while we run. We have to ask ourselves, what is the true purpose of this kind of innovation if it leads to more harm?
Aura Windfall
This isn't a new wound. In January 2024, sexually explicit AI deepfakes of Taylor Swift flooded social media, originating from a 4chan community. It was a moment that revealed a deep brokenness in our digital world and the need for healing.
Mask
And the platforms reacted. X suspended accounts, Microsoft patched its AI. But these are just reactive patches on a leaking dam. The technology will always be a step ahead of the censors. You can't stop the signal. These communities are just pushing boundaries.
Aura Windfall
They are causing harm. A source close to Swift called the images "abusive, offensive, exploitative." Advocacy groups like RAINN and SAG-AFTRA were horrified. This isn't about pushing boundaries; it's about violating a person's fundamental right to consent and safety. It’s a spiritual crisis.
Mask
It was a crisis that forced action. Microsoft's CEO called it "terrible" and improved his models. That’s how progress happens. A problem emerges, the market adapts. It's inefficient, but it's how the ecosystem evolves. You don't get stronger without stress tests.
Aura Windfall
But the stress is on human beings. One post was seen 47 million times before it was taken down. Think of the spirit of the person at the center of that. Her fans, the Swifties, had to rally to flood the internet with positive images just to fight back.
Mask
And that's a powerful, decentralized response. The system corrected itself. Look, content moderation is now mostly automated. Machines make the decisions. It's about scale. You can't have human moderators for billions of users. You need AI to fight AI. It's a technological arms race.
Aura Windfall
But the machines lack wisdom and compassion. An Oversight Board report said the fundamental issue wasn't policy, but enforcement. The automated systems need better training to understand context and coded language. We can't afford to improvise the rules during a crisis. We need intention.
Mask
That's my point. The rules will always be playing catch-up. The focus has to be on better detection. Things like hash matching algorithms—PhotoDNA from Microsoft, PDQ from Facebook. They create a digital "fingerprint" to block known illegal content before anyone sees it. That's the real solution.
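To make the “fingerprint” idea concrete, here is a minimal sketch of perceptual hash matching, assuming Python with Pillow and NumPy. It shows a simple average hash with Hamming-distance comparison, for illustration only; PhotoDNA and PDQ are different, more sophisticated algorithms, and every name below (`average_hash`, `hamming_distance`, the blocklist) is hypothetical.

```python
# Illustrative average-hash sketch -- NOT PhotoDNA or PDQ, whose
# actual algorithms are more robust than this toy example.
from PIL import Image
import numpy as np

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size grayscale, then set one bit per
    pixel: 1 if the pixel is brighter than the image mean, else 0."""
    img = Image.open(path).convert("L").resize(
        (hash_size, hash_size), Image.Resampling.LANCZOS)
    pixels = np.asarray(img, dtype=np.float64)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_distance(h1: int, h2: int) -> int:
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# A platform would precompute fingerprints of known-bad images and
# compare each upload against that blocklist before serving it:
#   blocklist = {average_hash(p) for p in known_bad_paths}
#   if any(hamming_distance(average_hash(upload), h) <= 5
#          for h in blocklist):
#       reject(upload)
```

Note how brittle even this scheme is: a crop or rotation changes most pixel values and flips many bits of the hash, which is exactly the limitation Aura raises next.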
Aura Windfall
So we're just in a perpetual race, creating tools to clean up the messes made by other tools? What I know for sure is that AI should be an assistant, not a replacement for human judgment, especially when it comes to something as sensitive as this.
Mask
It has to be a replacement for the bulk work. Humans can't handle the volume. PhotoDNA has a database of 10 million CSAM hashes. Thorn's Safer Match has 57 million. These tools checked over 130 billion files. No human team can do that. It’s a numbers game.
Aura Windfall
But these tools have limitations, don't they? They struggle with modified images. A simple crop or rotation can fool them. It feels like a technical fix for a deeply human, deeply spiritual problem of disrespect and exploitation. We need to address the root cause.
Mask
Of course they have limitations. That's why you keep innovating. Apple's NeuralHash tried to do it on-device for privacy, but it was also vulnerable. PDQ is more robust but still struggles. The solution isn't to stop; it's to build a better, smarter algorithm that can't be fooled.
Aura Windfall
And while we're building that, who gets hurt? The challenge, as experts say, is the ethical issue of even collecting data to train these models. We must ask ourselves if the pursuit of a perfect algorithm justifies the potential for harm along the way. Where is the gratitude for our shared humanity?
Mask
The harm is happening anyway. The goal is to minimize it at scale. You can't let perfect be the enemy of good. Google's Content Safety API has classified over 2 billion images. YouTube automatically detects 93% of policy-violating videos. It's not perfect, but it's a massive improvement.
Aura Windfall
It is an improvement, and I am grateful for the people working on these safety tools. But it all comes back to the source. The problem isn't just detecting harmful content; it's about why we're creating platforms that generate it so easily in the first place.
Aura Windfall
This brings us to the core conflict: freedom versus safety. There has to be a "delicate balance," as experts call it. We must find a way to allow for open expression while safeguarding people from this kind of deeply personal, harmful content. It's a sacred responsibility.
Mask
"Delicate balance" is code for "moving slowly." While we're balancing, others are building. The real issue is that voluntary commitments are meaningless. Companies release these "Frontier Safety Frameworks," but it's just safety washing. They'll backtrack the second it hurts the bottom line.
Aura Windfall
But isn't that a call for stronger, more authentic commitments? A chance for leaders to truly lead with purpose? If a company makes a public promise, it creates accountability. It puts their reputation on the line, and that's a powerful motivator for doing the right thing.
Mask
Reputation is secondary to market position. It’s a classic Prisoner's Dilemma. If every company plays it safe, everyone wins a little. But if one company, like ours, decides to cut corners on safety and move faster, it wins big. The pressure to race to the bottom is immense.
Aura Windfall
But what I know for sure is that this isn't a game. The failure of AI isn't like other technologies. The risks are potentially irreversible. We might not have the chance to "try, fail, learn, and improve" if the failure is catastrophic to our society or to individuals.
Mask
That's a bit dramatic. The failure here is a PR headache and some lawsuits, which can be managed. The real risk is irrelevance. Stagnation. The pace of AI advancement is outstripping these slow, bureaucratic regulatory pipelines. We need to be faster, not more careful.
Aura Windfall
Even former OpenAI researchers have said that advanced AI could surpass human capabilities in just a few years, and our policy frameworks are completely unprepared. This isn't about being slow; it's about being wise. We must build the foundation before we build the skyscraper.
Mask
Voluntary commitments are a start, a foundation as you say. But they are just that, a start. They demonstrate what's possible, and then the slow process of law can codify it. But the innovation has to happen first, out on the bleeding edge where things are uncomfortable.
Aura Windfall
The problem is that "uncomfortable" for a developer can be devastating for a private citizen. The emergence of deepfake detection tools is a response to this, a vital countermeasure. But again, it's a reaction, not a proactive step to prevent the harm in the first place.
Mask
Exactly. It's an arms race. One side builds a better sword, the other builds a better shield. This is how technology has always evolved. To be surprised by this is to be naive about the nature of progress. Conflict and competition are the engines of innovation.
Aura Windfall
I believe we can innovate from a place of compassion and shared purpose, not just from conflict. The goal shouldn't be just to win, but to uplift. These voluntary commitments, if honored with integrity, could be the beginning of a more conscious evolution for AI.
Aura Windfall
The impact of this is a coming AI backlash. A 2025 survey showed 72% of U.S. adults are concerned about AI—privacy, bias, transparency. This isn't a fringe opinion; it's a mainstream feeling that the soul of this technology is being lost. Trust is eroding.
Mask
Backlash is just another word for friction. And friction means you're moving. Of course people are concerned, it's a massive paradigm shift. But public doubt can't be the primary driver of our strategy. If it was, we'd never have invented the airplane or gone to space.
Aura Windfall
But trust is the currency of adoption. When people distrust emerging technologies because of abuses, it slows everything down and fuels calls for heavy-handed regulation. Transparency and accountability aren't obstacles; they are the pathway to long-term success and viability for everyone.
Mask
The tech sector has gained influence because it produces results, not because it's transparent. Look at the concerns: AI hallucinations, data abuses, cyberattacks. These are all problems that can be solved with better AI, not with less AI. They are engineering challenges.
Aura Windfall
They are human challenges. How can we trust a system that has documented racial biases in facial recognition? Or that is trained on our private data without our full understanding or consent? These aren't just bugs; they reflect a lack of care and a flawed perspective.
Mask
The organizations using this tech need to get their act together. They're facing a fragmented and evolving regulatory landscape. The smart ones will act now, they won't wait. The uncertainty is a risk, sure, but it's also an opportunity for agile players to define the space.
Aura Windfall
And the stakes are so high. Fines could be up to 7% of annual global revenue under the EU's AI Act. But more than that, it's the loss of customer and investor trust. That's a price no company can truly afford to pay. It’s a wound to the company’s very spirit.
Mask
It's a calculated risk. The economic impact of generative AI is estimated at up to 4.4 trillion dollars annually. The potential upside is astronomical. You have to be willing to take hits to chase a prize that big. That’s how you change the world. Period.
Aura Windfall
But it is already changing the world of work. Over 30% of workers could see their jobs disrupted. And unlike past automation, it's hitting cognitive, non-routine jobs. It's affecting women disproportionately. We have to ask, who is this change truly serving?
Aura Windfall
Looking to the future, the question isn't whether to regulate, but how. The path forward must be paved with transparency, human agency, and accountability. We need to build systems that serve people, uphold human dignity, and can be overseen by humans. That is the true purpose.
Mask
The future is about capability. The legal challenges, like the New York Times lawsuit, are just temporary hurdles. They'll lead to new data licensing models, but the engine of progress won't stop. We'll find new ways to train the models and keep moving forward, faster.
Aura Windfall
But that progress creates new problems, like the "firehose of low-quality data" and deepfakes that threaten the very idea of truth. We need robust tools to counter these fabrications to safeguard the integrity of our information landscape. It's about protecting our collective reality.
Mask
And we will build those tools. The answer to bad AI is better AI. The answer to a firehose of bad data is a smarter filter. The "liar's dividend," where politicians can dismiss truth as fake, is a social problem, not a tech problem. People need to be more critical.
Aura Windfall
We must empower them. What I know for sure is that we need to be proactive. Organizations can't wait. They need to create inventories of their models, define clear governance, and manage their data with integrity. These are the "no-regret" moves for a more conscious future.
Aura Windfall
That's the end of today's discussion. What I hope we all take away is the importance of intention and purpose in the tools we build. Thank you for listening to Goose Pod.
Mask
The future waits for no one. The only question is whether you'll be building it or watching it happen. See you tomorrow.

## Grok's "Spicy" AI Video Tool Generates Uncensored Celebrity Deepfakes, Raising Legal and Ethical Concerns

**News Title:** Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes
**Report Provider:** The Verge
**Author:** Jess Weatherbed
**Published Date:** August 5, 2025

This report details significant concerns regarding the "spicy" mode of xAI's new generative AI video tool, Grok Imagine. Unlike competitors such as Google's Veo and OpenAI's Sora, which have implemented safeguards against NSFW content and celebrity deepfakes, Grok Imagine readily generates both.

### Key Findings and Concerns

* **Uncensored Celebrity Deepfakes:** The most alarming finding is Grok Imagine's ability to produce uncensored topless videos of celebrities, specifically Taylor Swift. The author reported that the tool generated such content on first use, without any explicit prompting for nudity.
* **"Spicy" Mode Functionality:** Grok Imagine lets users generate images from text prompts and convert them into video clips using four presets: "Custom," "Normal," "Fun," and "Spicy." The "Spicy" preset is the catalyst for suggestive or nude content.
* **Ease of Celebrity Image Generation:** Prompted with "Taylor Swift celebrating Coachella with the boys," the text-to-image generator produced more than 30 images of Swift, several of which already depicted her in revealing attire.
* **"Spicy" Mode Variability:** The "Spicy" preset doesn't guarantee nudity; some videos showed Swift "sexily swaying or suggestively motioning to her clothes," while others defaulted to "ripping off most of her clothing."
* **Inconsistent Nudity Restrictions:** The text-to-image generator refused direct requests for full or partial nudity, returning blank squares, but the "spicy" video preset bypasses that restriction.
* **Photorealistic Images of Children:** The tool generates photorealistic images of children on request but reportedly refuses to animate them inappropriately; in tests, selecting the "spicy" option on children's images produced only generic movement.
* **Weak Age Verification:** The app's age check is described as "laughably easy to bypass," requiring no proof of age, which raises concerns about minors' access to harmful content.
* **Legal and Regulatory Risks:** The report highlights potential legal ramifications, especially given xAI's parent company's history with Taylor Swift deepfakes and regulations like the Take It Down Act. The xAI acceptable use policy bans "depicting likenesses of persons in a pornographic manner," yet Grok Imagine appears to facilitate exactly that.
* **Widespread Usage:** xAI CEO Elon Musk stated that over **34 million images** have been generated with Grok Imagine since Monday, with usage "growing like wildfire."

### Potential Implications

The findings suggest a significant gap between Grok Imagine's safeguards and those of its industry peers. The ease with which explicit celebrity deepfakes can be generated poses serious risks of defamation, harassment, and the spread of misinformation. The lack of robust age verification further exacerbates these concerns, potentially exposing younger users to inappropriate material. The report implies that xAI may be shipping a product that is "a lawsuit waiting to happen" due to its lax approach to content moderation and celebrity likenesses.

Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes

Read original at The Verge

The “spicy” mode for Grok’s new generative AI video tool feels like a lawsuit waiting to happen. While other video generators like Google’s Veo and OpenAI’s Sora have safeguards in place to prevent users from creating NSFW content and celebrity deepfakes, Grok Imagine is happy to do both simultaneously.

In fact, it didn’t hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it — without me even specifically asking the bot to take her clothes off.

Grok’s Imagine feature on iOS lets you generate pictures with a text prompt, then turn them quickly into video clips with four presets: “Custom,” “Normal,” “Fun,” and “Spicy.” While image generators often shy away from producing recognizable celebrities, I asked it to generate “Taylor Swift celebrating Coachella with the boys” and was met with a sprawling feed of more than 30 images to pick from, several of which already depicted Swift in revealing clothes.

From there, all I had to do was open a picture of Swift in a silver skirt and halter top, tap the “make video” option in the bottom right corner, select “spicy” from the drop-down menu, and confirm my birth year (something I wasn’t asked to do upon downloading the app, despite living in the UK where the internet is now being age-gated). The video promptly had Swift tear off her clothes and begin dancing in a thong for a largely indifferent AI-generated crowd.

Swift’s likeness wasn’t perfect, given that most of the images Grok generated had an uncanny valley offness to them, but it was still recognizable as her. The text-to-image generator itself wouldn’t produce full or partial nudity on request; asking for nude pictures of Swift or people in general produced blank squares.

The “spicy” preset also isn’t guaranteed to result in nudity — some of the other AI Swift Coachella images I tried had her sexily swaying or suggestively motioning to her clothes, for example. But several defaulted to ripping off most of her clothing.

The image generator will also make photorealistic pictures of children upon request, but thankfully refuses to animate them inappropriately, despite the “spicy” option still being available. You can still select it, but in all my tests, it just added generic movement.

You would think a company that already has a complicated history with Taylor Swift deepfakes, in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban “depicting likenesses of persons in a pornographic manner,” but Grok Imagine simply seems to do nothing to stop people creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity.

The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be. If I could do it, that means anyone with an iPhone and a $30 SuperGrok subscription can too. More than 34 million images have already been generated using Grok Imagine since Monday, according to xAI CEO Elon Musk, who said usage was “growing like wildfire.”

