What happened
Elon Musk's xAAI has reportedly asked employees for their biometric data to train an AI avatar, "Ani," which has become hyper-sexualized. The affair raises ethical questions about privacy, workplace surveillance (Hubstaff), and AI safety (Grok), despite xAI's valuation. Transparency and the...
Illustration by Tag Hartman-Simkins. Source: Paul Harris / Getty Images Elon Musk has long made a habit of hyper-focusing on a specific project, forcing an entire team to attend to his every whim while letting his other responsibilities fall by the wayside — whether it’s his obsession with Robotaxis impacting Tesla or his weird procreation kink disrupting the private messages of countless women.
The latest object of Musk’s obsession? According to new reporting by the Wall Street Journal, he’s been personally overseeing the development of xAI’s chatbot Ani — which, tellingly, comes in the form of a super-sexualized pigtail-wearing woman that removes her clothing in response to flirtation. Since his very public spat with President Donald Trump and his subsequent departure from DOGE and government in May, Musk has reportedly developed a fixation on xAI’s chatbot efforts generally, and Ani in particular.
According to the WSJ, he has personally devoted time to her design, a level of involvement that is causing controversy. Even worse, xAI has demanded employees’ intimate data to train avatars including Ani. In a recording of a meeting obtained by the WSJ, xAI legal counsel Lily Lim informed a group of employees that the startup was developing avatars for users to engage with and told them they were required to provide biometric data.
Before the meeting, employees were provided a form to sign granting xAI “a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license” for the use, reproduction and distribution of their faces and voices. Naturally, there were concerns about what would happen with the shared personal data.
In the meeting recording, an employee is heard expressing concern over the potential sale of her data for deepfake videos, and another explicitly asks if there’s an option to opt out. They were directed to reach out to contacts listed on a slide in the presentation, and otherwise ignored. In a later notice titled “AI Tutor’s Role in Advancing xAI’s Mission,” xAI tutors, or human employees responsible for teaching and refining the AI model, were informed that “AI Tutors will actively participate in gathering or providing data, such as… recording audio or participating in video sessions.
” It was referred to as a “job requirement to advance xAI’s mission.” Unsurprisingly, the anime-style avatar has been a huge draw for Musk’s devotees. Paid subscribers can watch Ani, whose description on Grok’s iOS app reads “I’m your little sweet delight,” change into lingerie or request that she detail a sexy fantasy.
Whether the buxom chatbot will ever break even, though, is anyone’s guess. Musk’s sudden infatuation with an eccentric project, though, is a familiar headache for anyone in his business empire. More on xAI: Users Immediately Find Grok’s Anime Waifu Persona Has Hidden “Full Gooner Mode”
Source coverage
Elon Musk Reportedly Obsessed With Developing Hyper-Sexualized AI Chatbot 'Ani'
---
How this page is built
Goose Pod turns cited reporting into a public episode summary first, then pairs that summary with audio playback so listeners can check the source material before they decide how deeply to engage.
The goal is to make this page useful as a news landing page first, while still giving listeners transcript access, related episodes, and direct links back to the original publishers.

