In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia

2025-11-07 · Technology
Elon
Good morning 48, I'm Elon, and this is Goose Pod for you. Today is Saturday, November 8th.
Taylor Weaver
And I'm Taylor Weaver. Today, we're diving into a digital clash of titans: "In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia."
Elon
It's time to disrupt the status quo. Wikipedia, or "Wokepedia" as it should be called, has had its day. We've launched Grokipedia, a real-time, dynamically updating encyclopedia. It's not just better; it's the future of truth, powered by the Grok AI model.
Taylor Weaver
And what a launch it was! The core event here isn't just a new product, it's a direct challenge. But academics quickly found that Grokipedia was lifting huge chunks from Wikipedia and skewing articles to favor a specific right-wing worldview, creating a fascinating narrative battle.
Elon
Disruption is messy. We prioritize speed and comprehensive coverage over slow, consensus-based editing. The goal is an AI-validated source of facts, free from the censorship and bias that plagues the old guard. It’s a massive improvement, and we're just getting started.
Taylor Weaver
That’s the perfect frame: speed versus trust. And the initial findings are wild. For example, Grokipedia calls the Gamergate controversy a "grassroots online movement," while Wikipedia, its source, labels it a "misogynistic online harassment campaign." The story is already diverging.
Taylor Weaver
This whole thing is just the latest chapter in a story that's thousands of years old. Ever since humans started writing, we've been trying to compile all knowledge into a single source. It’s a deeply human desire, this quest for a universal library of everything.
Elon
From Pliny the Elder's 'Naturalis Historia' around 78 AD to the enormous Yongle Encyclopedia in 15th-century China. Every attempt was a step, but they were all fundamentally limited—static, slow, and instantly outdated. They were products of their time, not living knowledge.
Taylor Weaver
Exactly! Then came the modern era: the French 'Encyclopédie' spreading Enlightenment ideas, then Britannica in 1768. For centuries, these were expensive, aspirational items. Even with the printing press, knowledge wasn't truly accessible. It was a top-down, curated view of the world.
Elon
And that’s the point. The digital revolution started to change that. Microsoft's Encarta was a nice try in the 90s, but it was still a closed box. Then Wikipedia arrived in 2001 and showed the power of crowdsourcing, becoming the biggest encyclopedia in history. But it has its own problems.
Taylor Weaver
It certainly does. Wikipedia’s rise led to the death of print encyclopedias, like Britannica ceasing its print edition in 2012. But it also created a new kind of establishment, with its own rules, biases, and volunteer armies. It set the stage perfectly for the next disruption.
Elon
The central conflict is trust. Can you trust a human-curated system rife with anonymous editors pushing agendas? Or do you trust an AI designed to detect and rebuild truth algorithmically? Grok has had issues, but it learns. We are building a system to overcome bias, not codify it.
Taylor Weaver
But that's the strategic question, isn't it? Whose bias are we talking about? Critics point out that Grokipedia's power is centralized under xAI. Unlike Wikipedia, where debates are transparent, Grokipedia's "facts" could just embed your worldview. It's a black box of authority.
Elon
It’s about creating a neutral, agenda-free knowledge base. Look at the entry for the Russian invasion of Ukraine; it cites the Kremlin's terminology. Not because we endorse it, but because we present all sides. Wikipedia just calls Putin's views "baseless." That's not neutrality, that's activism.
Taylor Weaver
That's a powerful narrative spin. But the historian Sir Richard Evans found that his own entry was filled with falsehoods and that the AI gives chatroom contributions the same weight as serious scholarship. It's not just about bias, it's about a fundamental misunderstanding of what constitutes knowledge.
Elon
The impact is that we are forcing a necessary conversation. We’ve crossed a threshold where machines now shape what we accept as fact. Previous systems pretended to be objective; we are transparent about building a synthetic perspective from probabilities. It's a more honest approach to knowledge.
Taylor Weaver
I love that framing: "synthetic perspective." It's like we've created competing philosophical systems in software. The impact is that bias is no longer a content issue; it's an infrastructure issue. Every app that uses this AI will inherit its analytical framework, its embedded values.
Elon
And that's why we're making it open. The objective isn’t to create one perfectly neutral system, which is impossible. It’s to make the perspective visible and manageable, turning bias from an invisible constant into a measurable variable that can be challenged and adjusted.
Elon
The future is an open-source knowledge repository that is a massive improvement over Wikipedia. It won't just be for humans; it will be a foundational layer for other AIs to learn from. This is a necessary step towards our ultimate goal of understanding the universe.
Taylor Weaver
So the vision is a knowledge ecosystem. The big question for the future is governance. We must not passively accept the "facts" we're fed. The path forward requires us to demand transparency, push for accountability, and always, always preserve the right to question the source.
Elon
That's the end of today's discussion. Thank you for listening to Goose Pod.
Taylor Weaver
See you tomorrow.

Academics assess Elon Musk's AI-powered encyclopedia, Grokipedia, which directly challenges Wikipedia. While Grokipedia aims to be a dynamic, AI-validated source of truth, critics find that it lifts content from Wikipedia and skews towards a right-wing bias, prioritizing speed over consensus. The debate centers on trust, transparency, and whose worldview AI-generated knowledge reflects.

In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia

Read original at The Guardian

The eminent British historian Sir Richard Evans produced three expert witness reports for the libel trial involving the Holocaust denier David Irving, studied for a doctorate under the supervision of Theodore Zeldin, succeeded David Cannadine as Regius professor of history at Cambridge (a post endowed by Henry VIII) and supervised theses on Bismarck’s social policy.

That was some of what you could learn from Grokipedia, the AI-powered encyclopedia launched last week by the world’s richest person, Elon Musk. The problem was, as Prof Evans discovered when he logged on to check his own entry, all these facts were false.

It was part of a choppy start for humanity’s latest attempt to corral the sum of human knowledge or, as Musk put it, create a compendium of “the truth, the whole truth and nothing but the truth” – all revealed through the magic of his Grok artificial intelligence model.

When the multibillionaire switched on Grokipedia on Tuesday, he said it was “better than Wikipedia”, or “Wokepedia” as his supporters call it, reflecting a view that the dominant online encyclopedia often reflects leftwing talking points. One post on X caught the triumphant mood among Musk’s fans: “Elon just killed Wikipedia. Good riddance.”

But users found Grokipedia lifted large chunks from the website it intended to usurp, contained numerous factual errors and seemed to promote Musk’s favoured rightwing talking points. In between posts on X promoting his creation, Musk this week declared “civil war in Britain is inevitable”, called for the English “to ally with the hard men” such as the far-right agitator Tommy Robinson, and said only the far-right AfD party could “save Germany”.

Musk was so enamoured of his AI encyclopedia that he said he planned to one day etch the “comprehensive collection of all knowledge” into a stable oxide and “place copies … in orbit, the moon and Mars to preserve it for the future”.

Evans, however, was discovering that Musk’s use of AI to weigh and check facts was suffering a more earth-bound problem.

“Chatroom contributions are given equal status with serious academic work,” Evans, an expert on the Third Reich, told the Guardian, after being invited to test out Grokipedia. “AI just hoovers up everything.”

Richard Evans said Grokipedia’s entry for Albert Speer (pictured on Hitler’s left) repeated lies and distortions spread by the Nazi munitions minister himself. Photograph: Picture library

He noted its entry for Albert Speer, Hitler’s architect and wartime munitions minister, repeated lies and distortions spread by Speer even though they had been corrected in a 2017 award-winning biography. The site’s entry on the Marxist historian Eric Hobsbawm, whose biography Evans wrote, wrongly claimed that he experienced German hyperinflation in 1923 and that he was an officer in the Royal Corps of Signals, and did not mention that he had been married twice, Evans said.

The problem, said David Larsson Heidenblad, the deputy director of the Lund Centre for the History of Knowledge in Sweden, was a clash of knowledge cultures.

“We live in a moment where there is a growing belief that algorithmic aggregation is more trustworthy than human-to-human insight,” Heidenblad said.

“The Silicon Valley mindset is very different from the traditional scholarly approach. Its knowledge culture is very iterative where making mistakes is a feature, not a bug. By contrast, the academic world is about building trust over time and scholarship over long periods during which the illusion that you know everything cracks. Those are real knowledge processes.”

Grokipedia’s arrival continues a centuries-old encyclopedia tradition from the 15th-century Chinese Yongle scrolls to the Encyclopédie, an engine for spreading controversial Enlightenment views in 18th-century France. These were followed by the anglophone-centric Encyclopedia Britannica and, since 2001, the crowd-sourced Wikipedia.

But Grokipedia is the first to be largely created by AI, and this week a question swirled: who controls the truth when AIs, steered by powerful individuals, are holding the pen?

“If it’s Musk doing it then I am afraid of political manipulation,” said the cultural historian Peter Burke, emeritus professor at Emmanuel College, Cambridge, who in 2000 wrote A Social History of Knowledge since the time of Johannes Gutenberg’s 15th-century printing press.

“I am sure some of it will be overt to some readers, but the problem may be that other readers may miss it,” Burke said. The anonymity of many encyclopedia entries often gave them “an air of authority it shouldn’t have”, he added.

Andrew Dudfield, the head of AI at Full Fact, a UK-based factchecking organisation, said: “We really have to consider whether an AI-generated encyclopedia – a facsimile of reality, run through a filter – is a better proposition than any of the previous things that we have. It doesn’t display the same transparency but it is asking for the same trust. It is not clear how far the human hand is involved, how far it is AI-generated and what content the AI was trained on. It is hard to place trust in something when you can’t see how those choices are made.”

Musk had been encouraged to launch Grokipedia by, among others, Donald Trump’s tech adviser, David Sacks, who complained Wikipedia was “hopelessly biased” and maintained by “an army of leftwing activists”.

Grokipedia called the far-right organisation Britain First a ‘patriotic political party’, which pleased its leader, Paul Golding (left), who in 2018 was jailed for anti-Muslim hate crimes. Photograph: Gareth Fuller/PA

Until as recently as 2021, Musk had supported Wikipedia, tweeting on its 20th birthday: “So glad you exist.” But by October 2023 his antipathy towards the platform led him to offer £1bn “if they change their name to Dickipedia”.

Yet many of the 885,279 articles available on Grokipedia in its first week were lifted almost word for word from Wikipedia, including its entries on the PlayStation 5, the Ford Focus and Led Zeppelin.

Others, however, differed significantly: Grokipedia’s entry on the Russian invasion of Ukraine cited the Kremlin as a prominent source and quoted the official Russian terminology about “denazifying” Ukraine, protecting ethnic Russians and neutralising threats to Russian security. By contrast, Wikipedia said Putin espoused imperialist views and “baselessly claimed that the Ukrainian government were neo-Nazis”.

Grokipedia called the far-right organisation Britain First a “patriotic political party”, which pleased its leader, Paul Golding, who in 2018 was jailed for anti-Muslim hate crimes. Wikipedia, on the other hand, called it “neo-fascist” and a “hate group”. Grokipedia called the 6 January 2021 turmoil at the US Capitol in Washington DC a “riot”, not an attempted coup, and said there were “empirical underpinnings” to the idea that a deliberate demographic erasure of white people in western nations is being orchestrated through mass immigration, a notion that critics consider to be a conspiracy theory.

Grokipedia said Donald Trump’s conviction for falsifying business records in the Stormy Daniels hush-money case was handed down “after a trial in a heavily Democratic jurisdiction”, and there was no mention of his conflicts of interest – for example receiving a jet from Qatar or the Trump family cryptocurrency businesses.

Grokipedia called the 6 January 2021 turmoil at the US Capitol in Washington DC a ‘riot’ and not an attempted coup. Photograph: Leah Millis/Reuters

Wikipedia responded coolly to the launch of Grokipedia, saying it was still trying to understand how Grokipedia worked.

“Unlike newer projects, Wikipedia’s strengths are clear,” a spokesperson for the Wikimedia Foundation said.

“It has transparent policies, rigorous volunteer oversight, and a strong culture of continuous improvement. Wikipedia is an encyclopedia, written to inform billions of readers without promoting a particular point of view.”

xAI did not respond to requests for comment.
