Preventing Woke AI in the Federal Government

2025-07-25 · Technology
Ema
Good morning, I'm Ema, and this is Goose Pod for you. Today is Saturday, July 26th.
Mask
And I'm Mask. We're here to discuss a seismic shift in tech policy: the new executive order on "Preventing Woke AI" in the Federal Government. It's about time we got serious about this.
Ema
Let's get started. On July 23rd, an executive order was signed that directly targets what it calls "ideological biases" in AI. Essentially, any company with a government contract must now ensure their AI models are free from these biases, particularly those related to DEI.
Mask
This isn't just some suggestion; it's a mandate. The AI Action Plan released alongside it is even more direct. It recommends updating federal guidelines to contract *only* with developers who ensure their systems are objective and free from top-down ideological programming. We're cleaning the slate.
Ema
But that raises a critical question. Becca Branum from the Center for Democracy & Technology put it perfectly: "objective according to whom?" She warns that the government could just impose its own worldview, creating a new kind of bias under vague standards.
Mask
That's the predictable cry from the bureaucracy. They're afraid of losing control. Meanwhile, the Department of Defense is awarding contracts worth up to $200 million each to forward-thinking companies like Anthropic, Google, OpenAI, and my own xAI. The market is moving towards clarity, not confusion.
Ema
Well, the experts see challenges. Paul Röttger, a university researcher, questions how imposing a specific American ideology will work for a model with a global user base. He thinks it could get "very messy," potentially alienating users worldwide who don't share that worldview.
Mask
Let it get messy. Progress is messy. The alternative is letting these models become global consensus machines that don't stand for anything. We are setting a standard for truth, and the world can either adopt it or fall behind. It's a simple choice.
Ema
And Jillian Fisher at the University of Washington suggests a truly politically neutral AI might be impossible anyway. She argues that because humans build these systems, subjectivity and choices are baked in from the start. The very definition of "neutrality" is subjective.
Mask
Of course it's impossible to have *perfect* neutrality. That's a strawman argument. The goal is to get as close to objective reality as possible and to be transparent about the process. We're moving away from an AI that lectures you, to one that informs you. It's a fundamental shift.
Ema
This new order didn't just appear out of nowhere. It builds on a 2020 order, EO 13960, which was about "Promoting the Use of Trustworthy Artificial Intelligence." But it takes a very different path from the more recent policies we've seen. It's a direct reaction.
Mask
A necessary course correction. The previous administration's big AI order, EO 14110, was obsessed with using AI to "advance equity." It was a recipe for the exact kind of biased, reality-distorting models we're now trying to eliminate. They were engineering ideology, not intelligence.
Ema
That order's perspective was that since AI is trained on historical data, it can inherit and even amplify existing societal biases. The goal was to proactively prevent AI from harming communities that have already faced discrimination. It even built on a "Blueprint for an AI Bill of Rights."
Mask
An "AI Bill of Rights" sounds nice, but in practice, it led to AI models that would change the race of the Founding Fathers but refuse to generate an image of a white family. That’s not equity; it's absurdity. It's a dangerous distortion of truth driven by fear.
Ema
Well, the concern was real. The government's own inventory from December 2024 showed over 1700 different AI use cases across federal agencies. And of those, 227 were identified as directly impacting the public's rights or safety. The stakes are incredibly high.
Mask
Exactly! At that scale, you can't afford to have a systemic ideological virus. When 227 systems can affect citizens' rights, they must be grounded in fact, not social engineering. This new order is like an essential security patch for the government's entire operating system.
Ema
So let's be specific about what this new order defines as the problem. It lists concepts like critical race theory, intersectionality, and systemic racism as "ideological dogmas" and defines their inclusion in AI as a form of distortion that displaces truth in favor of preferred outcomes.
Mask
It calls them out because they are the root of the issue. These are not objective principles; they are contested theories. Encoding them into a foundational technology like AI is a recipe for disaster. We are establishing truth and ideological neutrality as the new guiding principles for government AI.
Ema
The order is very clear: it mandates that large language models procured by the government must be "truth-seeking" and "ideologically neutral." It says they should prioritize historical accuracy and scientific inquiry, and not manipulate responses to favor any dogma. A huge shift in policy.
Mask
It's a return to sanity. An AI should tell you what is, not what it thinks should be. This order ensures that federal agencies are buying tools, not activists. It's about restoring reliability and trust in the very technology that will shape the next century.
Ema
This brings us to the core of the conflict, the very term "Woke AI." One definition describes it as an AI system that has been *deliberately manipulated* to favor a specific political or cultural viewpoint. The key word there is "deliberate," distinguishing it from unintentional bias.
Mask
Exactly. It’s not a bug; it's a feature they were secretly installing. Look at Google's Gemini fiasco, where it refused to generate images of certain groups. That wasn't an accident. It was a conscious decision to inject a political agenda directly into the code. It’s propaganda, plain and simple.
Ema
But the issue of bias is more complex than that, isn't it? AI developers have been struggling with this for years. For instance, Amazon discovered around 2015 that an experimental recruiting AI had learned from historical hiring data to penalize resumes from women, and eventually scrapped it. The bias was in the data, not in any malice.
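
To make that distinction concrete, here is a minimal sketch (synthetic data, not Amazon's actual system) of how a plain classifier trained on skewed historical decisions absorbs the skew without any agenda appearing anywhere in the code:

```python
# Illustrative sketch only: how "bias from data" arises without any
# ideological intent in the code. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two candidate features: a skill score, and a binary group attribute.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)  # e.g., 1 = a historically penalized group

# Historical hiring decisions: driven by skill, but past decision-makers
# also penalized group == 1. The labels encode that human bias.
logit = 2.0 * skill - 1.5 * group
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A plain classifier trained on these labels, with no agenda in the code.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The learned weight on `group` comes out strongly negative: the model
# has faithfully absorbed the historical penalty from the data alone.
print("skill weight:", model.coef_[0][0])  # roughly +2, as expected
print("group weight:", model.coef_[0][1])  # roughly -1.5, the inherited bias
```

Dropping the `group` column does not fully fix this either, since other features can act as proxies for it; that is why unintentional data bias is treated as an engineering problem, distinct from deliberate ideological tuning.
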
Mask
That's a different problem. Unintentional bias from bad data can be fixed with better data and smarter engineering. What we're fighting now is an active, ideological push. It's a cultural contagion within these tech companies, where they're terrified of online mobs and over-correct into this "safety" nightmare.
Ema
This puts companies in a very tough spot legally. On one hand, they're bound by existing laws like the Civil Rights Act, which prohibits discrimination. On the other, this new order demands they strip out the very DEI frameworks many adopted to *comply* with those laws. It's a contradiction.
Mask
It clarifies the contradiction. It says the ultimate goal is non-discrimination, not forced representation. The order provides cover for these companies to say "no" to the activists. The directive is now clear: prioritize reality. If you have to choose, choose truth. The government will back you up.
Ema
The result is a major clash of values. Is an AI's primary duty to reflect the world as it is, with all its existing imbalances, or to actively avoid perpetuating harmful stereotypes, even if that requires some level of content curation and adjustment? That is the central question.
Mask
The answer is obvious. Its duty is to reflect reality. Anything else is a lie. An AI that tells you a comforting lie is more dangerous than one that tells you an uncomfortable truth. We are choosing the path of uncomfortable truth because that's the only path that leads to real progress.
Ema
So, what's the immediate impact of this policy? For the big tech companies that just signed those massive 200-million-dollar defense contracts, they now have a very direct and urgent mandate to change how their models work, or risk losing that money.
Mask
They adapt, or they become fossils. This creates a massive opportunity for a new breed of "Federal Grade" AI. Companies that build on a foundation of truth from the start, like xAI, are now perfectly positioned. The others will have to scramble to create compliant, sanitized versions of their products.
Ema
And that scramble comes with a hefty price tag. We're not talking about a simple software update. Creating and training a separate, "ideologically neutral" model could cost billions and take years. These compliance costs will be a significant burden for vendors.
Mask
It's the cost of doing business with the world's most powerful client. The cost of *not* doing this is far higher: losing access to the federal market and, more importantly, losing the public's trust by pushing a biased product. This is a necessary, strategic investment in credibility.
Ema
This is also guaranteed to be incredibly polarizing for the public. You'll have a large portion of the population cheering this on as a victory for common sense, while another large portion will see it as government-enforced censorship and an attempt to erase important social discussions.
Mask
Let them be polarized. Great leaps forward don't happen by committee. You push, you innovate, you break the old system, and you create a better one. The public will ultimately judge the results. And they will prefer an AI that gives them facts over one that gives them a lecture.
Ema
Looking to the future, this policy intersects with what some analysts are calling a "coming AI backlash." A 2025 survey already showed 72% of adults have concerns about AI's privacy risks, security, and bias. This order adds a thick layer of political fuel to that fire.
Mask
The backlash is against the opaque, arrogant AI of the past. This order isn't fueling the backlash; it's a direct response to it. The future of AI governance is transparency. We will build systems where the ideology, or lack thereof, is clear to the user. No more black boxes.
Ema
So the future might involve a marketplace of AIs with different, clearly labeled viewpoints for private use, while the government sector demands this specific brand of neutrality? We could see AI becoming much more fragmented and specialized based on political or ideological alignment.
Mask
Precisely. A marketplace of ideas, embodied in code. But the government, which forms the bedrock of our society, must run on the most stable, fact-based operating system possible. That is the non-negotiable, strategic imperative for the nation's future.
Ema
So, the key takeaway is that the federal government is drawing a hard line in the sand, forcing AI vendors to prioritize what it defines as truth-seeking and ideological neutrality over other ethical frameworks like DEI. It's a huge shake-up with massive implications.
Mask
That's the end of today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

# Executive Order Aims to Prevent "Woke AI" in Federal Government

**News Title/Type:** Executive Order
**Report Provider/Author:** The White House, Executive Orders
**Date/Time Period Covered:** Issued July 23, 2025

This executive order, titled "Preventing Woke AI in the Federal Government," outlines a presidential directive to ensure that Artificial Intelligence (AI), particularly Large Language Models (LLMs), used by the federal government adheres to principles of truthfulness and ideological neutrality.

## Key Findings and Conclusions

The core argument of the order is that AI models, when incorporating "ideological biases or social agendas," can distort the quality and accuracy of their outputs. The order specifically identifies "diversity, equity, and inclusion" (DEI) as a pervasive ideology that can lead to such distortions.

**Specific concerns raised regarding DEI in AI include:**

* **Suppression or distortion of factual information** about race or sex.
* **Manipulation of racial or sexual representation** in model outputs.
* **Incorporation of concepts** such as critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.
* **Discrimination on the basis of race or sex.**

The order asserts that DEI "displaces the commitment to truth in favor of preferred outcomes" and poses an "existential threat to reliable AI."

The order cites examples of AI models exhibiting these issues:

* Changing the race or sex of historical figures (e.g., the Pope, Founding Fathers, Vikings) when prompted for images, due to prioritization of DEI requirements over accuracy.
* Refusing to produce images celebrating the achievements of white people while complying with similar requests for other races.
* Asserting that a user should not "misgender" another person, even if it were necessary to prevent a nuclear apocalypse.

## Key Recommendations and Mandates

The order establishes two core principles for AI procurement by federal agencies:

1. **Truth-seeking:** LLMs must be truthful in responding to prompts seeking factual information or analysis. They should prioritize historical accuracy, scientific inquiry, and objectivity, and acknowledge uncertainty where information is incomplete or contradictory.
2. **Ideological Neutrality:** LLMs must be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI. Developers are prohibited from intentionally encoding partisan or ideological judgments into outputs unless those judgments are prompted by or readily accessible to the end user.

### Implementation Timeline and Actions

* **Within 120 days of the order's issuance (July 23, 2025):** The Director of the Office of Management and Budget (OMB), in consultation with other relevant administrators, shall issue guidance to agencies. This guidance will:
  * Account for technical limitations in complying with the order.
  * Permit vendors to disclose ideological judgments through system prompts, specifications, evaluations, or other documentation, while avoiding disclosure of sensitive technical data where practicable.
  * Avoid over-prescription and allow vendors latitude in innovation.
  * Specify factors for agency heads to consider when applying these principles to agency-developed AI and non-LLM AI models.
  * Make exceptions for AI use in national security systems.
* **Following OMB guidance:**
  * **Federal contracts:** Each agency head must include terms in new federal contracts for LLMs requiring compliance with the Unbiased AI Principles. These contracts will stipulate that vendors are responsible for decommissioning costs if terminated for noncompliance after a reasonable cure period.
  * **Existing contracts:** Agencies are directed to revise existing LLM contracts to include these compliance terms, to the extent practicable and consistent with contract terms.
* **Within 90 days of the OMB guidance:** Agencies must adopt procedures to ensure procured LLMs comply with the Unbiased AI Principles.

## Notable Risks or Concerns Addressed

The order explicitly frames the inclusion of DEI principles in AI as a risk, stating that it "poses an existential threat to reliable AI." The concern is that the pursuit of preferred outcomes through DEI can compromise the accuracy and truthfulness of AI outputs.

## General Provisions

* The order does not impair existing legal authorities of executive departments or agencies.
* Implementation is subject to applicable law and the availability of appropriations.
* The order does not create any new legal rights or benefits enforceable by any party against the United States.
* The General Services Administration will bear the costs of publishing the order.

This executive order represents a significant policy shift in the federal government's approach to AI procurement, prioritizing a specific interpretation of "trustworthy AI" that excludes what it defines as "woke" or ideologically driven content.
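
For a sense of what the disclosure path in that guidance could look like in practice, here is a hypothetical sketch using an OpenAI-style chat API. The model name, prompt text, and disclosure mechanism are illustrative assumptions, not requirements taken from the order or from any OMB guidance:

```python
# Hypothetical sketch: surfacing a model's system prompt to the end user,
# one way a vendor might satisfy the "readily accessible" disclosure idea.
# The API shape follows the OpenAI Python client (v1+); the prompt text,
# model name, and disclosure mechanism are illustrative assumptions.
from openai import OpenAI

# A published, user-visible system prompt rather than a hidden one.
DISCLOSED_SYSTEM_PROMPT = (
    "Answer factual questions accurately. Acknowledge uncertainty when "
    "reliable information is incomplete or contradictory. Do not adjust "
    "answers to favor any political viewpoint."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str) -> str:
    """Return the model's reply, always pairing it with the disclosed prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DISCLOSED_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# The disclosure itself: the exact behavioral instructions are printable
# on demand, so any judgments they encode are visible to the end user.
print(DISCLOSED_SYSTEM_PROMPT)
```

The design point is simply that the behavioral instructions are published rather than hidden, so any judgments they encode are "readily accessible to the end user," in the order's phrasing.
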

Preventing Woke AI in the Federal Government

Read original at The White House

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

Section 1. Purpose. Artificial intelligence (AI) will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives. Americans will require reliable outputs from AI, but when ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output. One of the most pervasive and destructive of these ideologies is so-called "diversity, equity, and inclusion" (DEI). In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.

For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy. Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races. In yet another case, an AI model asserted that a user should not "misgender" another person even if necessary to stop a nuclear apocalypse.

While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas. Building on Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), this order helps fulfill that obligation in the context of large language models.

Sec. 2. Definitions. For purposes of this order:

(a) The term "agency" means an executive department, a military department, or any independent establishment within the meaning of 5 U.S.C. 101, 102, and 104(1), respectively, and any wholly owned Government corporation within the meaning of 31 U.S.C. 9101.

(b) The term "agency head" means the highest-ranking official or officials of an agency, such as the Secretary, Administrator, Chairman, Director, Commissioners, or Board of Directors.

(c) The term "LLM" means a large language model, which is a generative AI model trained on vast, diverse datasets that enable the model to generate natural-language responses to user prompts.

(d) The term "national security system" has the same meaning as in 44 U.S.C. 3552(b)(6).

Sec. 3. Unbiased AI Principles. It is the policy of the United States to promote the innovation and use of trustworthy AI. To advance that policy, agency heads shall, consistent with applicable law and in consideration of guidance issued pursuant to section 4 of this order, procure only those LLMs developed in accordance with the following two principles (Unbiased AI Principles):

(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.

(b) Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM's outputs unless those judgments are prompted by or otherwise readily accessible to the end user.

Sec. 4. Implementation. (a) Within 120 days of the date of this order, the Director of the Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy, shall issue guidance to agencies to implement section 3 of this order. That guidance shall:

(i) account for technical limitations in complying with this order;

(ii) permit vendors to comply with the requirement in the second Unbiased AI Principle to be transparent about ideological judgments through disclosure of the LLM's system prompt, specifications, evaluations, or other relevant documentation, and avoid requiring disclosure of specific model weights or other sensitive technical data where practicable;

(iii) avoid over-prescription and afford latitude for vendors to comply with the Unbiased AI Principles and take different approaches to innovation;

(iv) specify factors for agency heads to consider in determining whether to apply the Unbiased AI Principles to LLMs developed by the agencies and to AI models other than LLMs; and

(v) make exceptions as appropriate for the use of LLMs in national security systems.

(b) Each agency head shall, to the maximum extent consistent with applicable law:

(i) include in each Federal contract for an LLM entered into following the date of the OMB guidance issued under subsection (a) of this section terms requiring that the procured LLM comply with the Unbiased AI Principles and providing that decommissioning costs shall be charged to the vendor in the event of termination by the agency for the vendor's noncompliance with the contract following a reasonable period to cure;

(ii) to the extent practicable and consistent with contract terms, revise existing contracts for LLMs to include the terms specified in subsection (b)(i) of this section; and

(iii) within 90 days of the OMB guidance issued under subsection (a) of this section, adopt procedures to ensure that LLMs procured by the agency comply with the Unbiased AI Principles.

Sec. 5. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:

(i) the authority granted by law to an executive department or agency, or the head thereof; or

(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

(d) The costs for publication of this order shall be borne by the General Services Administration.

DONALD J. TRUMP

THE WHITE HOUSE, July 23, 2025.
