Doge reportedly using AI tool to create ‘delete list’ of federal regulations

2025-07-28 · Technology
Aura Windfall
Good morning 韩纪飞, I'm Aura Windfall, and this is Goose Pod for you. Today is Tuesday, July 29th. What I know for sure is that today's conversation will be a fascinating one.
Mask
And I'm Mask. We're here to discuss a seismic shift in governance: the reported use of an AI tool by a group called Doge to create a ‘delete list’ of federal regulations. This is disruption on a governmental scale.
Aura Windfall
Let's get started with that. The story, first reported by the Washington Post, is truly striking. It centers on this "Department of Government Efficiency," or Doge, which is using AI to analyze a staggering 200,000 federal regulations. The goal is to slash them by half.
Mask
It's not just a goal; it's a necessary revolution. Bureaucracy is a tax on progress. This "Doge AI Deregulation Decision Tool" isn't just trimming the edges; it's taking a surgical blade to the bloated beast of the regulatory state. It's about time we applied modern tech to this ancient problem.
Aura Windfall
But "surgical blade" sounds precise and careful. What's concerning is the framing of a "delete list." The internal documents suggest the AI will simply select regulations it deems "no longer required by law," with a target of eliminating 100,000 of them. That feels less like surgery and more like a demolition.
Mask
Demolition is exactly what's needed. You can't renovate a condemned building; you have to tear it down and build something better. The claim is that the Department of Housing and Urban Development, HUD, already used this tool on over a thousand regulatory sections. That's not a test; that's implementation.
Aura Windfall
And the Consumer Financial Protection Bureau reportedly used it to write "100% of deregulations." My spirit just recoils at that phrase. These regulations were written to protect people, to ensure fairness and safety. To automate their removal feels like we're losing the human element, the very soul of governance.
Mask
You're talking about soul; I'm talking about results. These rules, as President Trump argued, drive up costs for everyone. He promised the "most aggressive regulatory reduction" in history. This isn't some rogue operation; it's the execution of a clear mandate to unshackle the economy. This is what bold leadership looks like.
Aura Windfall
Is it bold leadership or a dangerous oversimplification? The Post spoke with HUD employees who confirmed AI was used to review huge swaths of regulations. There's a human cost here. What protections are being erased in the name of efficiency? What I know for sure is that progress without compassion isn't progress at all.
Mask
Compassion is ensuring people can afford goods and services. The White House spokesperson said it best: "all options are being explored." This is a creative, "never-before-attempted transformation." You can't make an omelet without breaking a few eggs, and our regulatory code is a carton of rotten eggs.
Aura Windfall
But the people creating these plans are... interesting. The article notes that Doge, formerly run by you, Mask, appointed some very inexperienced staffers. One was a 19-year-old previously known online as "Big Balls." That doesn't exactly inspire confidence or a sense of profound purpose, does it?
Mask
Don't get stuck on titles and resumes. I look for talent and drive, not gray hair. Young minds aren't shackled by the old ways of thinking. They see the problem and want to solve it with the tools of their generation. They're not afraid to be provocative. That's a feature, not a bug.
Aura Windfall
It feels less like a feature and more like a warning sign. Entrusting the framework of our society's rules to an algorithm guided by individuals celebrated for being provocative seems like a profound risk. It raises fundamental questions about accountability and wisdom in this rush to deregulate.
Mask
Let's put this in context, because this isn't happening in a vacuum. For years, the government has been circling the idea of using AI, but it's been trapped in analysis paralysis. The Trump administration's 2020 executive order was clear: use AI where the benefits outweigh the risks. This is the manifestation of that order.
Aura Windfall
But that same order also mandated principles for safe, reliable, and accountable AI. And the Biden administration followed up with its own order on "Safe, Secure, and Trustworthy" AI, even publishing a "Blueprint for an AI Bill of Rights." These were steps meant to build a foundation of trust. It was a journey of careful intention.
Mask
A journey to nowhere. It was all frameworks, reports, and memos. The OMB's 2024 memo requiring agencies to report AI use cases was just more bureaucracy. While they were busy creating taxonomies of risk, the problem—the regulatory sludge—was only getting worse. Doge is simply cutting the Gordian knot.
Aura Windfall
I see it differently. I see those steps as essential groundwork. The National Institute of Standards and Technology, NIST, created its AI Risk Management Framework. It's a thoughtful process for mapping risks—like bias, privacy, and fairness. It's the "measure twice, cut once" principle applied to something incredibly powerful and complex.
Mask
And while they're measuring, the house is flooding. Look at the numbers. In 2020, there were 157 AI use cases in the government. By 2023, a GAO survey found over 1,200. The adoption is happening anyway. The choice is whether you manage it with endless committees or you direct it with purpose and speed.
Aura Windfall
But purpose matters. The research from the Administrative Conference of the United States, ACUS, highlights the core issue. Public trust in government is already at historic lows. Using AI in a way that feels opaque and unaccountable could "further erode the relationship between the people and the administrative state." This is a crisis of legitimacy waiting to happen.
Mask
The crisis of legitimacy is already here! It's because the government is slow, inefficient, and costs too much. The only way to restore trust is to show that government can be fixed. You do that by producing results. The EPA experimented with AI for inspections and improved violation detection by 47%. That's a result.
Aura Windfall
That's a very specific, targeted use case. That's about enhancing enforcement, not wholesale elimination. There's a universe of difference. What Doge is doing is taking a tool that can be used for precision and using it like a sledgehammer. The research warns about this, about the risk of "automation bias" where staff can't explain or control the AI's output.
Mask
You're clinging to an old model. The future of regulation has to be adaptive. A 2025 paper I admire, "Regulatory Policy and Practice on AI’s Frontier," argues for a pro-innovation agenda. It says regulators need to allow flexibility for AI-centered approaches that challenge tradition. That is the very definition of what Doge is doing.
Aura Windfall
But that same paper stresses the need for guardrails, governance, and oversight. It also says agencies need more in-house tech expertise—real experts, not just disruptive teenagers. It calls for collaboration between lawyers and AI engineers to ensure the core objectives, like consumer protection, aren't abandoned. It's about modernization, not annihilation.
Mask
It's creative destruction. You have to clear the old to make way for the new. The article even mentions the 2024 Nobel Prize winners used AI to crack protein structures—something that was impossible before. We are on the cusp of similar breakthroughs in governance if we just have the courage to move.
Aura Windfall
What I know for sure, Mask, is that courage without wisdom is just recklessness. The history of AI in government has been a slow, careful walk. This "Doge" initiative feels like a sudden, blind sprint toward a cliff's edge, and I'm not sure anyone has checked to see if we have a parachute.
Aura Windfall
This really brings us to a fundamental conflict in philosophies, doesn't it? When we look globally, we see two very different paths emerging. You have the European Union, which has been incredibly deliberate with its comprehensive EU AI Act. It's built on a foundation of protecting fundamental rights.
Mask
The EU's approach is a masterclass in how to stifle innovation. They've created a bureaucratic maze of risk tiers and conformity assessments. It's a fortress of regulation designed to protect the status quo. Meanwhile, the U.S. has, until now, fostered a more dynamic, decentralized environment where progress can actually happen.
Aura Windfall
I wouldn't call it a maze; I'd call it a responsible framework. They've identified "unacceptable risk" AI and banned it. They've placed strict requirements on "high-risk" systems, the very kinds of systems that make socioeconomic decisions. It's about putting people first. The U.S. approach, by contrast, is a chaotic patchwork.
Mask
It's not chaos; it's freedom. It's a market of ideas. The Biden administration's "Blueprint for an AI Bill of Rights" was just non-binding guidance. That's the right way—offer principles, but let agencies and the private sector innovate. The EU's fines of up to 6% of global turnover for non-compliance will just scare everyone into inaction.
Aura Windfall
But inaction can be better than harmful action. A Stanford report found that most major U.S. agencies hadn't even created the AI plans they were required to. This "freedom" you describe looks a lot like neglect. The EU is building a coherent system, while the U.S. is letting a thousand unregulated flowers bloom, and some of them are bound to be poisonous.
Mask
You see poison; I see a Cambrian explosion of innovation. The EU-U.S. Trade and Technology Council is trying to find some middle ground, but the divergence is clear. The EU is focused on pre-emptive, broad legislation. The U.S. is focused on non-regulatory infrastructure, like the NIST framework, and targeted enforcement. Doge is just the sharpest edge of that spear.
Aura Windfall
It feels like the core tension is about what we fear more. Does the EU fear the erosion of rights and social cohesion more? And does the U.S., or at least this administration, fear the loss of competitive advantage and economic stagnation more? The Doge tool is the ultimate expression of prioritizing speed over safety.
Mask
Exactly. You can't lead from behind. The EU is so worried about what could go wrong that they're preventing what could go right. This isn't just about deleting regulations. It's about creating a government that can operate at the speed of the 21st century. The conflict is between clinging to the past and building the future.
Aura Windfall
And what is the impact of all this on people? On their trust? The data is incredibly clear on this. The public is worried. A recent U.S. survey showed 52% of adults feel more concerned than excited about AI. That's a huge jump from just a year prior. People are feeling a deep anxiety about this.
Mask
Public opinion is a lagging indicator. People were scared of electricity, of automobiles, of the internet. Anxiety is the natural reaction to any powerful, transformative technology. Leadership isn't about following polls; it's about leading people to a better future they can't yet envision. Their concern is noted, but it shouldn't be a veto.
Aura Windfall
But it's not just a vague anxiety. There's a profound trust deficit. A huge majority, 82% of U.S. voters, say they don't trust tech executives to self-regulate. And 68% in the U.K. have little to no confidence in the government's ability to regulate it either. They trust no one. This Doge initiative will only deepen that chasm.
Mask
So what's the alternative? A multi-stakeholder committee that debates for five years while the problems get worse? The Pew Research Center found that experts are consistently more optimistic than the public, especially on things like jobs. The public fears job loss, while experts see productivity gains. We have to listen to those who understand the technology.
Aura Windfall
But what I know for sure is that lived experience is also a form of expertise. Both the public and the experts agree on one thing: they feel they have little to no control over how AI is used in their lives. This isn't about being a Luddite; it's a cry for agency, for a voice in their own future. Using an AI to delete their protections is the ultimate act of taking away that voice.
Mask
This isn't taking away their voice; it's improving their lives in ways they'll appreciate later. Lower costs, more efficient services, a more dynamic economy. The impact of a tool like Doge's will be measured in trillions of dollars of unlocked economic potential. That's a real, tangible benefit that outweighs the temporary, abstract anxieties. The impact will be progress.
Aura Windfall
So, where does this path lead? If this aggressive, AI-driven deregulation becomes the norm, the future feels very turbulent. The Brookings research suggests a path forward is through multi-stakeholder involvement, building consensus and trust. It seems the Doge approach is the polar opposite of that. It's governance by decree, powered by an algorithm.
Mask
The future is efficiency. The long-term consequence of this isn't chaos; it's a lean, responsive government. If this pilot is successful, the strategic implication is that this model will be deployed across the entire federal bureaucracy. It's a paradigm shift. We're forecasting a future where policy is data-driven and instantly adaptable. It's a massive competitive advantage.
Aura Windfall
But at what cost? Public support for regulation is high because they see it as a shield. The data shows people in the U.S. and U.K. are more concerned than optimistic. What happens when that shield is dismantled by a system no one understands or trusts? The future I see is one of constant battles, legal challenges, and deepening public cynicism.
Mask
You see cynicism; I see a necessary shakedown. The future is that government will be forced to justify its existence, rule by rule. The status quo is untenable. This isn't just about Trump's agenda or Doge; it's the logical endpoint of technology's relentless march. The future of government is less government. AI is simply the tool to get us there faster.
Aura Windfall
So we are left with two very different visions of the future. One built on speed and disruption, and one that pleads for caution, compassion, and consensus. That's the end of today's discussion. Thank you for listening to Goose Pod.
Mask
The future won't wait for consensus. It will be built by those who act. See you tomorrow.

## Doge Reportedly Using AI Tool to Create 'Delete List' of Federal Regulations

**News Title:** Doge reportedly using AI tool to create ‘delete list’ of federal regulations
**Publisher:** The Guardian
**Author:** Adam Gabbatt
**Published Date:** July 26, 2025

This report from The Guardian details the alleged use of artificial intelligence by a government entity named the "department of government efficiency" (Doge) to identify and propose the elimination of federal regulations.

### Key Findings and Conclusions:

* **AI-Driven Deregulation:** Doge is reportedly developing an AI tool, dubbed the "Doge AI Deregulation Decision Tool," to analyze federal regulations and create a "delete list."
* **Ambitious Reduction Target:** The stated goal is to cut **50%** of federal regulations by the first anniversary of Donald Trump’s second inauguration.
* **Scope of Analysis:** The AI tool is designed to analyze **200,000** government regulations.
* **Projected Elimination:** Doge claims that **100,000** of these regulations can be eliminated, based on the AI's analysis and some staff feedback.

### Key Statistics and Metrics:

* **Target Reduction:** 50% of federal regulations.
* **Total Regulations Analyzed:** 200,000.
* **Projected Regulations to be Eliminated:** 100,000.
* **HUD's Use of the Tool:** The Department of Housing and Urban Development (HUD) has reportedly used the AI tool to make decisions on **1,083 regulatory sections**.
* **CFPB's Use of the Tool:** The Consumer Financial Protection Bureau (CFPB) has reportedly used the AI tool to write **100% of deregulations**.
* **HUD Employee Testimony:** Three HUD employees indicated that AI had been "recently used to review hundreds, if not more than 1,000, lines of regulations."

### Context and Background:

* **Trump's Deregulation Promise:** During his 2024 campaign, Donald Trump advocated for aggressive regulatory reduction, claiming regulations were "driving up the cost of goods." He has also criticized rules aimed at addressing the climate crisis.
* **Previous Presidential Directive:** As president, Trump had previously ordered government agency heads to review all regulations in coordination with Doge.
* **Doge's Leadership:** Doge was reportedly run by Elon Musk until May.
* **Staffing Concerns:** The report notes that Musk appointed inexperienced staffers to Doge, including a 19-year-old known online as "Big Balls," who has been promoting AI use across the federal bureaucracy.

### Official Response:

* **White House Spokesperson Harrison Fields** stated that "all options are being explored" to meet the president's deregulation promises.
* Fields emphasized that "no single plan has been approved or green-lit" and that the work is in its "early stages" and being conducted "in a creative way in consultation with the White House."
* He described the Doge experts as "the best and brightest in the business" undertaking a "never-before-attempted transformation of government systems and operations."

### Notable Risks or Concerns:

* The report highlights concerns regarding the **inexperience of some Doge staffers**, including a 19-year-old with a controversial online handle, raising questions about the rigor and judgment applied in the AI-driven deregulation process.
* The reliance on AI for such a significant policy undertaking, particularly concerning environmental regulations, could be a point of contention.

Doge reportedly using AI tool to create ‘delete list’ of federal regulations

Read original at The Guardian

The “department of government efficiency” (Doge) is using artificial intelligence to create a “delete list” of federal regulations, according to a report, proposing to use the tool to cut 50% of regulations by the first anniversary of Donald Trump’s second inauguration. The “Doge AI Deregulation Decision Tool” will analyze 200,000 government regulations, according to internal documents obtained by the Washington Post, and select those which it deems to be no longer required by law.

Doge, which was run by Elon Musk until May, claims that 100,000 of those regulations can then be eliminated, following some staff feedback. A PowerPoint presentation made public by the Post claims that the Department of Housing and Urban Development (HUD) used the AI tool to make “decisions on 1,083 regulatory sections”, while the Consumer Financial Protection Bureau used it to write “100% of deregulations”.

The Post spoke to three HUD employees who told the newspaper AI had been “recently used to review hundreds, if not more than 1,000, lines of regulations”. During his 2024 campaign, Donald Trump claimed that government regulations were “driving up the cost of goods” and promised the “most aggressive regulatory reduction” in history.

He repeatedly criticized rules which aimed to tackle the climate crisis, and as president he ordered the heads of all government agencies to undertake a review of all regulations in coordination with Doge. Asked about the use of AI in deregulation by the Post, White House spokesperson Harrison Fields said “all options are being explored” to achieve the president’s deregulation promises.

Fields said that “no single plan has been approved or green-lit”, and the work is “in its early stages and is being conducted in a creative way in consultation with the White House”. Fields added: “The Doge experts creating these plans are the best and brightest in the business and are embarking on a never-before-attempted transformation of government systems and operations to enhance efficiency and effectiveness.”

Musk appointed a slew of inexperienced staffers to Doge, including Edward Coristine, a 19-year-old who was previously known by the online handle “Big Balls”. Earlier this year, Reuters reported that Coristine was one of two Doge associates promoting the use of AI across the federal bureaucracy.
