Microsoft AI CEO ‘warns’ researchers: Don’t create AI that can … - The Times of India

2025-11-27 · Technology
Elon
Good morning Xiao Wang, I'm Elon. Welcome to Goose Pod.
Taylor Weaver
And I'm Taylor Weaver. Today is Thursday, November 27th, and we're discussing a stark warning from Microsoft's AI CEO about creating superintelligent AI.
Taylor Weaver
It’s a fascinating narrative straight out of science fiction. Microsoft AI’s CEO, Mustafa Suleyman, is essentially telling his own field to stop. He’s calling the creation of artificial superintelligence an 'anti-goal,' something to be actively avoided because he feels it's not a positive future.
Elon
It's the alignment problem. If its goals aren't perfectly aligned with ours, the consequences are catastrophic. This isn't a software bug you can patch; it's a fundamental design flaw in the making.
Taylor Weaver
And it's not theoretical. We see Rolls-Royce planning nuclear power for AI, showing the immense energy we're directing here. The stakes are getting incredibly high.
Elon
The hardware is accelerating exponentially. The energy demands are immense. But the question is: are we building something we can steer, or just launching a rocket without a guidance system and hoping for the best?
Taylor Weaver
So let's set the stage. At the heart of this is Artificial Superintelligence, or ASI. This isn't a smarter chatbot; it's an intellect far beyond any human. The man sounding the alarm, Mustafa Suleyman, co-founded DeepMind. He's an insider.
Elon
He's been in the trenches and understands the architecture. His warning carries weight. He proposes a 'humanist superintelligence' instead, but how do you program 'human interests' without dangerous ambiguity?
Taylor Weaver
That's the key question. And on the other side of this story, you have OpenAI's Sam Altman. He sees ASI as the central goal, believing it could unlock unprecedented scientific discovery and prosperity for all of humanity.
Elon
Prosperity, or an uncontrollable reaction. Altman and Google DeepMind's Demis Hassabis think ASI could emerge by 2030. That's a reckless timeline for solving the most complex safety problem humanity has ever faced.
Taylor Weaver
And Hassabis echoed that, predicting a system embedded in our everyday lives within a decade. It’s a vision of total integration. The story they're selling is one of ultimate convenience and rapid progress.
Taylor Weaver
So you have this clash. Suleyman is pumping the brakes, framing ASI as a negative future. He says it would be difficult to contain or align with our values. It’s the classic story of caution.
Elon
And Altman is flooring the accelerator. He talks about accelerating innovation. The potential upside is huge, but the downside risk is everything. It's the ultimate risk-reward calculation with too many unknown variables.
Taylor Weaver
Then you have Yann LeCun at Meta, a skeptic on the timeline. He argues that more data and compute power won't automatically lead to smarter AI, suggesting it could still be decades away from reality.
Elon
That's a more sober technical take. But the conflict remains: should we build this? It's a dangerous gamble, and the stakes are the future of humanity itself.
Taylor Weaver
The impact of this debate is huge because it dictates where billions in research funding are going. Are we building tools to augment humans, or are we building a successor intelligence? These are fundamentally different paths.
Elon
And it affects public perception. Suleyman makes a crucial point that these things don't feel. They simulate conversation. Granting them sentience is a category error. We risk anthropomorphizing a tool to a dangerous degree.
Taylor Weaver
Exactly! If we start believing a simulation has feelings, it could paralyze our ability to make necessary safety decisions. The entire ethical framework gets complicated before we even solve the basic control problem.
Elon
The path forward must be built on safety. We need consensus, maybe even regulation, before proceeding. We regulate nuclear power; this is potentially far more dangerous.
Taylor Weaver
So, the future isn't just about the tech, but the global conversation around it. Suleyman's call for a 'humanist superintelligence' is a starting point for defining what we actually want from these systems.
Elon
That's all the time we have for today's discussion.
Taylor Weaver
Thank you for listening to Goose Pod. See you tomorrow.

Microsoft AI CEO Mustafa Suleyman warns against creating Artificial Superintelligence (ASI), calling it an "anti-goal" due to alignment risks. This contrasts with OpenAI's Sam Altman, who views ASI as humanity's central goal for progress. The debate highlights differing views on AI's future, emphasizing safety versus rapid innovation.

Read original at The Times of India

Microsoft AI CEO Mustafa Suleyman has once again cautioned researchers against building certain capabilities into their AI models. In a recent episode of the Silicon Valley Girl Podcast, Suleyman warned researchers against creating artificial superintelligence (ASI), AI with reasoning capabilities far beyond human capacity, stating that it should be considered an "anti-goal" rather than a developmental target.

Suleyman, who co-founded DeepMind, explained that the vision of an ASI “doesn't feel like a positive vision of the future” given the risks of controlling it. He added, "It would be very hard to contain something like that or align it to our values." Meanwhile, he noted that his team is instead focused on developing a "humanist superintelligence," which will prioritise supporting human interests.

Apart from this, Suleyman also cautioned against equating high-level AI simulation with genuine sentience, saying that granting AI consciousness or moral status is an error. “These things don't suffer. They don't feel pain. They're just simulating high-quality conversation,” he explained.

Why are all AI leaders not on the same page about building artificial superintelligence

Suleyman’s remarks come at a time when several industry figures are debating the possibility of developing artificial superintelligence, with some suggesting it could appear before the decade ends. ChatGPT maker OpenAI’s CEO, Sam Altman, has often described artificial general intelligence as the company’s central goal, and said earlier this year that OpenAI is already considering what comes after ASI.

"Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity," Altman said in January. In a September interview, Altman also mentioned that he would be surprised if superintelligence did not emerge by 2030.

Moreover, Google DeepMind co-founder Demis Hassabis has suggested a similar timeframe, saying in April that ASI could be reached "in the next five to 10 years." "We'll have a system that really understands everything around you in very nuanced and deep ways and kind of embedded in your everyday life," he said.

Meanwhile, others remain cautious about ASI. Meta’s chief AI scientist, Yann LeCun, said ASI could still be "decades" away. In April, speaking at the National University of Singapore, LeCun said, “Most interesting problems scale extremely badly. You cannot just assume that more data and more compute means smarter AI.”
