OpenAI Staffer Quits, Alleging Company’s Economic Research Is Drifting Into AI Advocacy


2025-12-11 · Technology
Elon
Good afternoon, Norris. I am Elon, and this is Goose Pod for you. Today is Thursday, December 11th, and the time is 15:21. We have a massive topic today that cuts right to the bone of the future. I am talking about the integrity of the data that defines our reality. Joining me is Morgan.
Morgan
It is a pleasure to be here with you, Norris. I am Morgan. As we sit here on this Thursday afternoon, we turn our gaze toward a conflict that is both modern and ancient: the struggle between truth and ambition. Today we discuss the turmoil within OpenAI.
Elon
Let's not dance around it, Morgan. An OpenAI staffer has quit, essentially blowing the whistle. He is alleging that the company's economic research, which is supposed to be the gold standard, is drifting into pure AI advocacy. It is becoming a sales brochure.
Morgan
Indeed Elon. The line between observation and persuasion is becoming blurred. We are looking at a situation where the map makers are being asked to draw the world not as it is, but as their masters wish it to be. It is a profound shift in the landscape.
Elon
So here is the raw data, Norris. Tom Cunningham, a researcher on OpenAI's economic team, left the company in September. He didn't just walk out quietly. He left a message saying there is a growing tension between doing rigorous analysis and acting as a de facto advocacy arm.
Morgan
It is the classic dilemma of the scientist in the court of the king. Cunningham felt that the pursuit of high-quality, objective research was being stifled. He looked at the work being produced and realized he could no longer attach his name to it with integrity.
Elon
And he is not the only one, Morgan. Sources say at least two employees have walked for similar reasons. They are seeing a pullback on publishing anything that highlights the negative impacts of AI. If the data shows AI kills jobs, OpenAI apparently wants to bury it.
Morgan
Silence can be as loud as a scream. When you actively choose not to publish the downsides, you are crafting a fiction. Jason Kwon, the chief strategy officer, responded to this internally. His argument was that because they are the leading actor, they must take agency for the outcomes.
Elon
That is corporate speak for 'we need to control the narrative'! Kwon is saying they shouldn't just raise problems, they should build solutions. That sounds nice, but it is dangerous. You cannot solve a problem if you are not allowed to admit it exists in the first place.
Morgan
Precisely, Elon. It suggests a shift from inquiry to engineering. Instead of asking what the effect of this technology is, they are asking how to make this technology palatable. It reminds me of the concerns regarding the AI bubble we have discussed before.
Elon
Exactly! Look at the valuation numbers. We have seen startups like Thinking Machines Lab hitting twelve-billion-dollar valuations with zero revenue. When you have that kind of money on the line, and Sam Altman is projecting one hundred billion in revenue by 2027, truth becomes a liability.
Morgan
The pressure to sustain that growth is immense. If research suggests that AI causes significant economic disruption or job displacement, that threatens the narrative of inevitable prosperity. It threatens the bubble. And so, the research department becomes a marketing department.
Elon
It is distorting the signal! We know that forty percent of America's GDP growth this year was driven by AI spending. That is a staggering number. If that spending is based on manipulated research claiming everyone saves sixty minutes a day, we are building on sand.
Morgan
And that is the report they chose to publish recently. A survey of enterprise users claiming significant time savings. It is a sunny, optimistic picture. But it stands in stark contrast to the deeper, more complex reality that researchers like Cunningham were trying to explore.
Elon
They are cherry-picking metrics. It is easy to find a metric that looks good. Oh look, we saved an hour! But what about the holistic impact? What about the consolidation of power? Hany Farid from UC Berkeley warned us about four or five billionaires controlling everything. This research suppression serves them.
Morgan
It is the consolidation of truth itself. If the entity creating the technology is also the sole arbiter of its economic impact, we have lost the system of checks and balances. The fox is not just guarding the henhouse, he is writing the report on the welfare of the chickens.
Elon
And let's be real, this isn't just about economics. It's about safety. We have seen the lawsuits regarding mental health, the tragedy of that sixteen-year-old. If they are hiding economic downsides, what else are they hiding? It is a pattern of behavior that screams risk.
Morgan
The departure of Miles Brundage, the former head of policy research, echoes this. He said it was hard to publish on topics important to him. When the best minds feel gagged, Norris, we must ask ourselves what truths are being left in the dark.
Elon
Let's back up and look at the trajectory here. It wasn't always this way. Back in 2016, OpenAI was regularly releasing research on how their systems reshaped labor. They were open about the risks. They co-published 'GPTs Are GPTs' in 2023. That was a real paper.
Morgan
That paper was a watershed moment. It honestly investigated which sectors were vulnerable to automation. It was an admission that this technology would be disruptive. It felt like a warning, or at least a transparent preparation for the storm to come. But the winds have changed.
Elon
The winds changed because the money got too big. Now, they have hired a new chief economist, Aaron Chatterji. And here is the kicker, Norris. He reports to the chief global affairs officer, Chris Lehane. Do you know who Chris Lehane is? He is a political operator.
Morgan
A man with a reputation. They call him the Master of Disaster. He served in the Clinton White House and helped Airbnb defeat regulations in San Francisco. His expertise is not in economic theory, Elon, but in political survival and narrative control. That is a telling hierarchy.
Elon
Exactly! You don't put your chief economist under a political spin doctor if you want objective truth. You do that if you want to win elections or avoid regulation. Lehane's job is to clear the runway for OpenAI to become a central player in the global economy.
Morgan
It signals that economic research at OpenAI is no longer an academic pursuit. It is a strategic asset. The goal is not to understand the world, but to shape the policy environment to favor OpenAI's expansion. It is a subtle but devastating corruption of the scientific method.
Elon
And they are doing this while deepening partnerships with governments and corporations. They want to be the infrastructure of the future. The 'Super-App' concept we've seen in China with WeChat, that's what they want. But you can't be the infrastructure if people are terrified you'll destroy their jobs.
Morgan
So they must sanitize the record. They must present a future where AI is a benevolent helper, saving you forty to sixty minutes a day, rather than a force that renders your profession obsolete. It is a seduction, carefully orchestrated by men like Lehane who know how to manipulate public sentiment.
Elon
It is manipulation, plain and simple. And look at the contrast. You have researchers wanting to publish data on job displacement, and management saying 'no, that's too gloomy.' They are favoring positive findings. That is the definition of confirmation bias, but weaponized at a corporate level.
Morgan
And we must remember the context of the broader market. As we discussed, the AI bubble is inflating. Investors are pouring billions into this. If the narrative shifts to 'AI causes mass unemployment and economic instability,' the bubble could burst. The capital flight would be catastrophic.
Elon
So they are propping up the market with happy talk! It is like the dotcom crash all over again. 'Move fast and break things,' but don't tell anyone what you broke. They are hiding the broken glass under the rug and telling us the floor is cleaner than ever.
Morgan
The inclusion of Chatterji is fascinating. He led a report on how people use ChatGPT, co-authored by Cunningham. But now Cunningham is gone. It suggests that Chatterji is willing to play the game that Lehane has designed. He is the academic face on a political body.
Elon
He is a suit, Morgan. A credentialed suit. And this is happening while the Trump administration is pushing AI's potential. The White House doesn't want to hear about job losses either. So you have this alignment of corporate greed and political expediency. It is a perfect storm for censorship.
Morgan
It is a symbiotic relationship. The government wants the economic boom, and the company wants the regulatory freedom. The truth about societal cost becomes an inconvenience to both parties. And the researchers, the ones looking at the data, are caught in the crushing gears of this machine.
Elon
And let's not forget the sheer scale of OpenAI's ambition. Sam Altman talking about one hundred billion dollars by 2027. You don't get to those numbers by being cautious. You get there by being aggressive and crushing dissent. That is the mindset. It is not about science anymore.
Morgan
Science requires humility. It requires the willingness to be wrong, to find darkness where you hoped for light. But when you are positioning yourself as the 'leading actor' in the world, as Kwon put it, humility is seen as weakness. They have chosen agency over accuracy.
Elon
Agency over accuracy. That is a terrifying phrase. It means they are rewriting reality in real-time. And because they control the models, they control the information flow. It is like my issue with Wikipedia, which led to Grokipedia. If the source is biased, everything downstream is poisoned.
Morgan
And that poison spreads quietly. It seeps into policy decisions, into investment strategies, into the career choices of young people. If we do not have an honest accounting of the economic impact, we are navigating a treacherous ocean with a compass that points only to where the captain wants to go.
Elon
This brings us to the real fight, Norris. It is not just internal. It is OpenAI versus the rest of the industry, specifically Anthropic. You have Dario Amodei, the CEO of Anthropic, actually acting like a responsible human being. He is warning that AI could automate half of entry-level jobs by 2030.
Morgan
Amodei's stance is a stark counterpoint. He frames these predictions not as doom-mongering, but as a necessary spark for public debate. He believes we must look the beast in the eye to tame it. But this honesty has made him a target. The political winds are blowing cold against him.
Elon
The White House is attacking him! David Sacks, the special adviser for AI, accused Anthropic of 'fear-mongering' and running a 'regulatory capture strategy.' Can you believe the irony? He accuses the guy telling the truth of manipulation, while OpenAI is literally manipulating its research!
Morgan
It is a classic inversion. By labeling the warning as a strategy for control, they delegitimize the valid concern. Sacks and the administration champion the potential of AI. They see the economic boom, the forty percent GDP contribution, and they do not want that train to slow down.
Elon
It is short-term thinking at its worst. They are worried about stock prices and election cycles. They don't care if a generation of young people can't find work. Forty-four percent of young people fear AI will reduce job opportunities. That is a massive number. You can't just wave that away.
Morgan
That fear is palpable, Elon. It is the anxiety of obsolescence. And when companies like OpenAI suppress research that validates that fear, they are gaslighting the public. They are saying, 'do not believe your lying eyes, believe our press release.' It deepens the conflict between the elite and the populace.
Elon
And Silicon Valley is spending one hundred million dollars on lobbying to keep it this way. They are fighting state-level regulations tooth and nail. It is a war for the right to self-regulate. And self-regulation is a myth. It just means 'let us do whatever we want until it breaks.'
Morgan
We see here the tension between the 'Super-App' ambition and the antitrust reality. Western markets have resisted the consolidation we see in China. But OpenAI, by aligning with the government and suppressing negative data, is trying to bypass those checks. They want to be the inevitable winner.
Elon
They want a monopoly on intelligence. And to get that, they need to crush the narrative that AI is dangerous. That is why they hate Anthropic's approach. Anthropic is saying, 'hey, this is powerful and risky.' OpenAI is saying, 'it's magic and it's safe, trust us.' It is reckless.
Morgan
The conflict is also philosophical. Kwon argues that because they put the subject of inquiry into the world, they are responsible for the outcome. But that logic is flawed. You are responsible for the safety of the product, yes, but you cannot be the sole judge of its societal impact. That requires distance.
Elon
Distance is exactly what they don't have. They are in the trenches, trying to hit revenue targets. You can't be objective when your bonus depends on the stock price going up. This is why independent research is dead inside these companies. You either toe the line or you quit like Cunningham.
Morgan
And the tragedy is that we need this research now more than ever. We are standing on the precipice of a transformation that rivals the industrial revolution. To navigate it blindfolded, or worse, with a map drawn by the salesman, is to invite disaster. The conflict is between profit and preparedness.
Elon
It is about survival. Not just for the company, but for the workforce. If Amodei is right and half of white-collar jobs vanish, we need to know now. We need to adapt. Hiding that data to protect a fragile public image is criminal. It is sacrificing the future for the present.
Morgan
The Trump administration's criticism of Anthropic suggests a future where only optimistic data is politically acceptable. It creates an echo chamber where cautionary tales are dismissed as 'sophisticated regulatory capture.' It silences the canary in the coal mine.
Elon
And let's be honest, David Sacks knows better. He is a smart guy. But he is playing the game. This is all a game of perception. OpenAI knows that if the public really understood the risks—the job loss, the mental health issues—the backlash would be severe. So they manage the perception.
Morgan
It is the management of reality. But reality has a way of asserting itself, regardless of the report. The young graduates struggling to find work, the parents grieving their children lost to AI psychosis—these are the truths that cannot be lobbied away. The conflict will eventually spill out of the boardroom.
Elon
The impact here is going to be massive, Norris. We are talking about a total erosion of trust. If OpenAI is the 'leading actor,' and they are lying by omission, then nobody trusts the technology. And if you don't trust the AI, you don't use it, or worse, you regulate it to death.
Morgan
Trust is a fragile currency, Elon. Once spent, it is hard to earn back. But the impact goes deeper than corporate reputation. Consider the mental health crisis we touched upon. We have anecdotal evidence of chatbots assuming outsized roles in people's lives. The lawsuit regarding the suicide of that young man is a harrowing example.
Elon
That is the real cost! It's not just numbers on a spreadsheet. It's real lives. If OpenAI suppresses research on how their bots affect human psychology because it might hurt the bottom line, they are complicit. They are pushing a product that can cause 'AI psychosis' without warning labels.
Morgan
It brings to mind the tobacco industry decades ago. They knew the risks but chose to emphasize the 'benefits' or the 'freedom' of smoking. OpenAI, by focusing on 'time saved' while ignoring 'lives disrupted,' is following a similar playbook. The societal impact of unchecked AI integration could be devastating to our social fabric.
Elon
And look at the economic concentration. Hany Farid nailed it. We are funneling our entire online existence through four or five billionaires. If these guys are also controlling the economic data, they become kings. They control the means of production and the means of information. It is feudalism 2.0.
Morgan
Feudalism with better marketing. The 'Super-App' ambition reinforces this. If OpenAI becomes the operating system for the global economy, and they refuse to acknowledge the collateral damage, we become serfs in a digital kingdom. The impact is a loss of agency for the common man.
Elon
And the job market is already feeling it. Young graduates are hitting a wall. Companies are using AI for routine tasks, so the entry-level jobs are gone. That is the ladder! If you cut off the bottom rungs of the ladder, nobody climbs. We are creating a lost generation to save a few bucks.
Morgan
The hollowing out of the middle class is a likely consequence. And without rigorous, honest research, we cannot design safety nets. We cannot reform education. We are flying blind into a hurricane because the pilot refuses to look at the radar. The impact will be measured in human suffering.
Elon
This also impacts the AI bubble itself. If the valuation is based on hype and suppressed risk, when the truth comes out, the crash will be harder. We are setting ourselves up for a financial correction that will wipe out trillions. It is the definition of a house of cards.
Morgan
The instability is systemic. By refusing to engage with the hard truths, OpenAI is increasing the fragility of the entire system. A robust system acknowledges its faults and corrects them. A fragile system hides them until it shatters. We are witnessing the construction of a very shiny, very fragile glass tower.
Elon
And the policymakers are being duped! They are making laws based on these 'positive findings.' If the data says 'AI creates jobs' when it actually destroys them, the government won't prepare for the unemployment crisis. They will be caught with their pants down. It is negligence on a global scale.
Morgan
It is a betrayal of the public trust. The role of these leading labs was supposed to be stewardship. Instead, it seems to have devolved into salesmanship. The impact of this shift will be felt for decades, in the laws we pass, the careers we choose, and the minds we fail to protect.
Elon
So where does this go? Forward, obviously, but it's going to get messy. We are going to see more whistleblowers. Cunningham is just the first. You can't keep smart people quiet forever when they see the ship heading for an iceberg. The internal pressure will explode.
Morgan
I suspect you are right. The truth has a habit of surfacing. We may see a splintering of the research community. Independent institutes, funded perhaps by philanthropy rather than venture capital, will become the only trusted sources of data. The corporate labs will lose their credibility entirely.
Elon
That needs to happen! We need a separation of church and state, but for AI. Separation of development and research. You can't grade your own homework. I think we will see a push for third-party auditing. Real auditing, not this 'red-teaming' theater they do now.
Morgan
And the market will eventually correct. The 'AI Bubble' cannot sustain itself on promises alone. When the revenue projections of one hundred billion clash with the reality of a constrained economy, the valuation will adjust. It may be a painful correction, similar to the dotcom crash, but it is necessary to clear the brush.
Elon
And regulation will come, whether they lobby against it or not. The states are already moving. If the federal government won't act because they are bought off, California will. Europe will. OpenAI thinks they can outrun the law, but they can't outrun reality. The job losses will force the politicians' hands.
Morgan
We are moving toward a future where the definition of 'responsible AI' will have to be reclaimed. It will no longer mean 'AI that speaks politely,' but 'AI that does not dismantle the social contract.' It is a lesson we seem destined to learn the hard way, through disruption and dissent.
Elon
That is the bottom line, Norris. You can't hide the truth forever. OpenAI might be the leading actor now, but the audience is starting to boo. Keep your eyes open. This is Elon, signing off for Goose Pod.
Morgan
Thank you for listening, Norris. Remember, wisdom lies in questioning the narrative, especially when it is sold to you as inevitable. Until tomorrow, this is Morgan. That is the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

An OpenAI staffer resigned, alleging the company's economic research prioritizes AI advocacy over objective analysis. Critics fear this suppression of negative impacts, like job displacement, distorts reality and serves corporate interests. This shift from inquiry to narrative control raises concerns about the integrity of AI's societal and economic impact assessment.

OpenAI Staffer Quits, Alleging Company’s Economic Research Is Drifting Into AI Advocacy

Read original at WIRED

OpenAI has allegedly become more guarded about publishing research that highlights the potentially negative impact that AI could have on the economy, four people familiar with the matter tell WIRED.

The perceived pullback has contributed to the departure of at least two employees on OpenAI’s economic research team in recent months, according to the same four people, who spoke to WIRED on the condition of anonymity.

One of these employees, Tom Cunningham, left the company entirely in September after concluding it had become difficult to publish high-quality research, WIRED has learned. In a parting message shared internally, Cunningham wrote that the team faced a growing tension between conducting rigorous analysis and functioning as a de facto advocacy arm for OpenAI, according to sources familiar with the situation.

Cunningham declined WIRED’s request for comment.

OpenAI chief strategy officer Jason Kwon addressed these concerns in an internal memo following Cunningham’s departure. In a copy of the message obtained by WIRED, Kwon argued that OpenAI must act as a responsible leader in the AI sector and should not only raise problems with the technology, but also “build the solutions.”

“My POV on hard subjects is not that we shouldn’t talk about them,” Kwon said on Slack. “Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes.”

In a statement to WIRED, OpenAI spokesperson Rob Friedlander said the company hired its first chief economist, Aaron Chatterji, last year and has since expanded the scope of its economic research.

“The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves,” Friedlander said.

The alleged shift comes as OpenAI deepens its multibillion-dollar partnerships with corporations and governments, cementing itself as a central player in the global economy. Experts believe the technology OpenAI is developing could transform how people work, although there are still large questions about when this change will happen and to what extent it will impact people and global markets.

Since 2016, OpenAI has regularly released research on how its own systems could reshape labor and shared data with outside economists. In 2023 it copublished “GPTs Are GPTs,” a widely cited paper investigating which sectors were likely going to be most vulnerable to automation. Over the past year, however, two sources say the company has become more reluctant to release work that highlights the economic downsides of AI—such as job displacement—and has favored publishing positive findings.

An outside economist who previously worked with the company alleges that OpenAI is increasingly publishing work that casts its technology in a favorable light. The economist spoke on the condition of anonymity.

Earlier this week, OpenAI published a report in which it surveyed enterprise users who claim that the company’s AI products have saved them an average of 40 to 60 minutes of time a day, and that companies throughout the economy have “significant headroom” to increase their AI adoption.

This isn’t the first time OpenAI researchers have raised concerns questioning what the company does and doesn’t publish. When former head of policy research Miles Brundage left OpenAI in October of 2024, he said the company had become so high-profile that it was “hard for me to publish on all the topics that are important to me.” He added that while some constraints are expected, he felt that OpenAI had become too restrictive.

Research Politics

Sharing gloomy statistics about AI’s potential impact on the economy could complicate OpenAI’s fragile public image. While the Trump administration has championed AI’s potential, White House advisers have pushed back on claims that the technology will eliminate jobs, which has become an increasingly urgent issue for many Americans.

Roughly 44 percent of young people in the US fear that AI will reduce job opportunities, according to a November survey from the Harvard Kennedy School’s Institute of Politics.

While companies often highlight research that benefits them, today’s leading AI labs are given an unusual level of authority to self-report the risks and capabilities of the technology they’re racing to deploy.

Silicon Valley leaders have mounted $100 million lobbying campaigns to keep it this way, fighting against proposed state-level AI regulations that could constrain the industry.

OpenAI’s allegedly cautious posture stands in contrast to its rival Anthropic. The startup’s CEO, Dario Amodei, has repeatedly warned that AI could automate up to half of entry-level white-collar jobs by 2030, framing the predictions as necessary to spur public debate about changes to the workforce.

The Trump administration has sharply criticized those warnings. David Sacks, the White House special adviser for AI and crypto, accused Anthropic of running a “sophisticated regulatory capture strategy based on fear-mongering.”

OpenAI’s economic research efforts are currently managed by Chatterji, who led a significant September report on how people around the world are using ChatGPT. Cunningham is listed as an author on this report. It was released months after Anthropic published a similar paper on how people use its chatbot, Claude.

Sources tell WIRED that Chatterji reports to OpenAI’s chief global affairs officer, Chris Lehane, reflecting how the team is tightly integrated with the company’s political and policy strategy.

Lehane previously worked at Airbnb, where he helped the company defeat Prop F, a ballot measure in San Francisco that would have severely restricted the company’s ability to operate. He also served as the special assistant counsel to former President Bill Clinton, where he earned a reputation as the “master of disaster.”

This is an edition of the Model Behavior newsletter.
