The government wants AI to fight wars and review your taxes

2025-07-17 · Technology
纪飞
Good evening 老张, I'm 纪飞, and this is Goose Pod for you. Today is Thursday, July 17th. The time is 22:19. Tonight, we're diving into a topic that sounds like it's straight out of a science fiction movie, but it's happening right now.
国荣
And I'm 国荣. It’s great to be here. The topic is indeed a big one: The government wants AI to fight wars and review your taxes. We're going to break down what this really means, from the battlefield to your wallet.
纪飞
Let's get started. The push for AI in the U.S. government isn't just a concept; it's a reality, and it’s accelerating. We're seeing AI projects in nearly every corner of the executive branch, driven by the idea that AI can handle many jobs better than humans.
国荣
It’s a pretty bold claim. It reminds me of when self-checkout lanes were introduced in supermarkets. The promise was speed and efficiency, but sometimes you just end up needing a human to help with the bagging or a pricing error. Is that what we're seeing here?
纪飞
In a way, yes, but on a much grander scale. Take the Pentagon, for instance. They have a core AI program called NGA Maven, which was launched in 2017. Since January of this year, the number of military and civilian personnel using it has more than doubled to over 25,000 users.
国荣
Wow, doubled in just a few months! That’s a huge jump. So what exactly does this Maven system do? Is it like a super-smart assistant for soldiers, or is it actually making life-or-death decisions on its own? It sounds a little like something from a movie.
纪飞
Primarily, it processes imagery from sources like drones and satellites to identify potential targets for humans to assess. But it’s expanding. The goal is for it to interpret audio and text too, creating a "live map" of military operations to help distinguish between combatants and noncombatants.
国荣
A "live map" that tells you who's who on the battlefield. That sounds incredibly complex. The idea of an AI making 1,000 accurate decisions about potential targets in an hour is both impressive and, frankly, a bit unsettling. The potential for error seems enormous.
纪飞
The stakes are certainly high. And it's not just in warfare. The Federal Aviation Administration, the FAA, is testing AI to assist air traffic controllers. The idea is to reduce fatigue and distraction by having AI handle repetitive, data-heavy tasks, while humans still communicate with pilots.
国荣
That seems a bit more reasonable. It’s like having a co-pilot who never gets tired or bored. But the article mentions this is partly due to staff shortages. Are they planning to use AI to replace people, or just to help the overworked staff they currently have?
纪飞
The person with knowledge of the plans said the agency is "planning for less people." So, workforce reduction is a clear goal. This trend extends to our airports as well. The TSA has already rolled out facial recognition cameras in over 200 airports nationwide to check IDs.
国荣
Oh, I’ve seen those! You walk up, look at a screen, and it matches your face to your ID. The TSA claims it's over 99 percent accurate, which sounds great. But I remember reading studies that facial recognition can be less accurate for people of color.
纪飞
That has been a major point of contention for years. Despite those concerns, the program is expanding. They're also experimenting with automated kiosks for pre-checked passengers that would require "minimal to no assistance" from TSA officers. The goal is to speed things up, but also to reduce manpower.
国荣
So, fewer human officers and more cameras and kiosks. It’s a trade-off. You might get through the line faster, but you’re also putting a lot of trust in the accuracy and fairness of these automated systems. What about something more... bureaucratic? Like taxes?
纪飞
Absolutely. The IRS is a prime target. They're already using AI to help employees query their internal manuals, but now they're looking to offload more significant tasks. A person familiar with the matter said the agency is examining the feasibility of deploying AI to manage tax audits.
国荣
An AI tax auditor? That’s a fascinating and slightly terrifying thought. I can just imagine getting a letter from an algorithm demanding to see my receipts. It removes the human element entirely, which could be good or very, very bad depending on how well it works.
纪飞
To understand how we got here, we need to look at the shift in policy. In 2023, President Biden signed an executive order to encourage government AI use while also trying to contain its risks. It was about creating guardrails. However, in January, President Trump repealed that order.
国荣
So, one administration was putting up fences, and the next one is taking them down to let the horses run free? That sounds like a dramatic shift. What was the thinking behind removing those guardrails? Was it just about moving faster?
纪飞
Precisely. The new administration's philosophy is centered on speed and American dominance in AI. A White House spokeswoman, Anna Kelly, stated, "President Trump has long stressed the importance of American AI dominance, and his administration is using every possible tool to streamline our government and deliver more efficient results for the American people."
国荣
"American AI dominance." That’s a strong phrase. It frames this as a global competition, almost like an arms race, but with algorithms instead of missiles. It explains the urgency, but it doesn't do much to soothe the nerves of people worried about the risks of moving too fast.
纪飞
The influence of tech leaders like Elon Musk is also a key factor. Musk has publicly stated that AI can do a better job than federal employees at many tasks. This idea is now being actively tested across the government, partly inspired by his U.S. DOGE Service, which cut thousands of government jobs.
国荣
Ah, the DOGE Service. It's this idea of running the government like a lean, aggressive tech startup. Cut the "bloat," automate everything, and disrupt the old way of doing things. It sounds great on a PowerPoint slide, but government isn't a tech company. Its "customers" are citizens who can't just switch to a competitor.
纪飞
That’s the core of the debate. And it’s not a new one. Jennifer Pahlka, who worked in the Obama administration, said, "In government, you have so much that needs doing and AI can help get it done and get it done faster." There’s a genuine belief in its potential for good.
国荣
I can see that. I mean, who wouldn't want government services to be faster and more efficient? No one enjoys waiting in long lines or dealing with bureaucratic red tape. If AI can fix that, it's a huge win for everyone. But the "how" is just as important as the "what."
纪飞
Exactly. On the other side, you have people like Sahil Lavingia, a former DOGE staffer. He pushed the Department of Veterans Affairs to use AI to find wasteful spending and argues that government should aggressively deploy the technology. He believes no task should be off-limits for experimentation.
国荣
No task off-limits? That's where it gets a little more radical. He even said "especially in war," and then added, "I don’t trust humans with life and death tasks." That’s a pretty extreme view. It suggests a future where human judgment is seen as a liability, not an asset.
纪飞
It's a maximalist view, as the article puts it, shared by some of the tech-focused people in the administration. This philosophy is what's driving the Pentagon to give tech companies like Palantir a larger role in American military power. They’ve more than doubled spending on a Palantir system called Maven Smart System.
国荣
So, we have a policy change that removed safety rails, a philosophical belief from tech leaders that AI is superior to humans, and a whole lot of money being poured into private tech companies to make it all happen. It’s a perfect storm for rapid, widespread automation.
纪飞
And it’s happening across the board. The U.S. Patent and Trademark Office, for example, is launching a pilot program where an AI will review patent applications and email applicants a list of the 10 most similar existing patents, hoping to get them to revise or withdraw.
国荣
That actually sounds pretty useful. It could save a lot of time for both the applicant and the examiner. But then the article says it will become "mandatory" for examiners to use an AI search tool starting July 21st. That's a subtle but important shift from a helpful tool to a required one.
纪飞
It is. And some staff there are concerned because they feel the rollout moved so quickly that even some top leaders didn't fully understand what was happening. This highlights the speed-over-caution approach. They are worried that the next step is having AI write the actual patent examination reports.
国荣
It’s like that old saying, "If you give a mouse a cookie..." First, the AI is a helpful search tool. Next, it's writing reports. Before you know it, it's making the final decision on whether a groundbreaking invention gets a patent or not. The line between assisting humans and replacing them is very thin.
纪飞
This pattern is visible at the Department of Veterans Affairs as well. The VA has been one of the most active agencies, deploying hundreds of AI uses last year. The top technology official there, Charles Worthington, interpreted the new policy with a clear message to his team.
国荣
Let me guess: "Full speed ahead"? It seems to be the common theme here. What did he say? I’m imagining an email with a lot of exclamation points and rocket ship emojis, but the reality is probably more... corporate.
纪飞
His email, which was made public, said: "The message is clear to me. Be aggressive in seizing AI opportunity, while implementing common sense safeguards to ensure these tools are trustworthy when they are used in VA’s most sensitive areas such as benefit determinations and health care."
国荣
"Be aggressive" but use "common sense safeguards." That sounds good, but "common sense" can mean very different things to different people, especially when you're being told to be aggressive. It feels like telling a race car driver to go as fast as possible, but also to be careful. One of those instructions usually wins out.
纪飞
And that’s the tension. Under the Biden administration, these sensitive programs were labeled as "safety impacting" or "rights impacting." The new administration has discontinued those labels. Now, they'll just be denoted as "high-impact." It’s a subtle but significant change in rhetoric.
国荣
It absolutely is. "Rights impacting" forces you to think about the person on the other end of the algorithm. It makes you consider fairness, due process, and human dignity. "High-impact" is a much more neutral, almost corporate term. It feels more focused on the system's performance than on its effect on people's lives.
纪飞
This change in language and policy really sets the stage for the conflicts we're now seeing. The administration's focus is on streamlining government and achieving what it calls "American AI dominance." They believe this will ultimately lower costs and reduce wait times for taxpayers.
国荣
And who doesn't want that? The promise is very appealing. It's the "move fast and break things" philosophy applied to government. But when the "things" you might break are public safety, fair tax audits, or a veteran's access to healthcare, the stakes are infinitely higher than just a buggy app.
纪飞
Which brings us to the central conflict, and it's a classic one: speed versus safety. On one side, you have this powerful push for efficiency and automation, championed by the administration and tech proponents. On the other, government watchdogs and even some federal workers are sounding the alarm, worried that the automation drive, combined with layoffs, will give unproven technology an outsized role in making critical decisions.
国荣
It's the classic "what could possibly go wrong?" scenario. It's one thing to automate a car factory, but it's another thing entirely to automate decisions about people's benefits or safety. A glitch in a factory might mean a car door is misaligned. A glitch in a government AI could be catastrophic.
纪飞
Elizabeth Laird from the Center for Democracy and Technology put it perfectly. She said that if AI *drives* federal decision-making instead of *aiding* human experts, glitches could unfairly deprive people of benefits or harm public safety. She sees "a fundamental mismatch" between what AI can do and what we expect from government.
国荣
That’s a great point. We expect government decisions, especially life-altering ones, to have a degree of nuance, compassion, and common sense that algorithms just don't have. An AI can't understand a person's unique circumstances or make a judgment call based on empathy. It just follows the code.
纪飞
And yet, the push to delegate these functions is strong. At the IRS, there's an effort being overseen by a DOGE official, Sam Corcos, to deploy AI more broadly. An internal source expressed worries about the lack of oversight, stating the "end game is to have one IT, HR, etc., for Treasury and get AI to do everything."
国荣
Get AI to do *everything*? That’s a chilling thought. It sounds less like modernizing government and more like hollowing it out, replacing public servants with software. You lose all that institutional knowledge and human experience. What happens when the AI gets it wrong? Who do you even appeal to? The software developer?
纪飞
That accountability question is huge. The Treasury Department's official statement is that they are simply implementing a "fulsome IRS modernization plan that taxpayers have deserved for over three decades." They frame it as a long-overdue upgrade, not a radical replacement of human experts.
国荣
Of course, that’s the official line. "Modernization" is a much friendlier word than "automation" or "layoffs." But it seems like there’s a real conflict between the people building these systems and the people who have to live with their decisions. There's a big difference between a helpful tool and an unquestionable authority.
纪飞
This tension is also clear at the FAA. They see AI as a potential tool to address safety concerns and staff shortages. The official statement emphasizes that "humans will remain in charge" and that their experts are "essential." But an anonymous source says the plans explicitly include "planning for less people."
国荣
So they’re speaking out of both sides of their mouth. Publicly, it's "AI is just a helper!" but internally, it's "How can we use AI to cut staff?" It's a conflict between public reassurance and internal cost-cutting goals. It makes you wonder which goal will ultimately win out.
纪飞
The proponents argue that we should be more worried about human error. Remember Sahil Lavingia, the former DOGE staffer? He said, "I don’t trust humans with life and death tasks." He believes that for critical functions, especially in war, a well-designed AI could be more reliable and less prone to mistakes than a person.
国荣
I understand the logic, but it’s a very cold and calculating way to see the world. It assumes the AI is perfectly designed, perfectly coded, and free of any of the biases of its human creators. We know from experience that’s rarely, if ever, the case. AI can be just as flawed as people, but in ways that are harder to see and correct.
纪飞
This is exactly the fear at the Department of Veterans Affairs. They use an AI algorithm called REACH VET to predict which veterans are at the highest risk of suicide and prioritize them for mental health assistance. It’s a noble goal, but an investigation found the system was biased.
国荣
Biased how? Let me guess, it was trained on data that wasn't representative of all veterans? This is a classic AI problem. You build a system to help everyone, but if your data is skewed, you end up helping some people more than others, and can even harm those you neglect.
纪飞
Precisely. The investigation found the system prioritized help to White men, especially those who had been divorced or widowed, because historical data showed them to be at the highest risk. As a result, it was less likely to flag women struggling with thoughts of suicide for assistance.
国荣
That is a devastating failure. It shows the danger of relying on "unproven technology" for life-and-death decisions. The AI wasn't being malicious; it was just reflecting the biases in the data it was given. It didn't account for risk factors specific to female veterans, like military sexual trauma.
纪飞
The VA has since updated the algorithm to account for those factors. But it’s a stark example of the conflict between the promise of AI and the reality of its implementation. The goal was to save lives, but for a time, the tool was actively failing an entire demographic of at-risk veterans.
国荣
It really crystallizes the whole debate. Proponents will point to the 117,000 at-risk veterans the system *did* identify. Critics will point to the unknown number of female veterans it missed. Both are right. The conflict is about whether the potential benefits outweigh the very real and very human costs of its flaws.
纪飞
And there's a conflict in the private sector's role. The administration is pushing to rely more on commercial technology from companies like Palantir and xAI. This gives tech firms a larger role in military power and government functions, blurring the line between public service and private profit.
国荣
That’s a huge point. When a private company is providing the AI for tax audits or target identification, who are they ultimately accountable to? Their shareholders or the American people? Their goals—like securing the next big government contract—might not always align perfectly with the public's best interest. It adds another layer of conflict.
纪飞
The impact of this push is already being felt. The most direct impact is on the federal workforce itself. The clear goal, as seen with the FAA and TSA, is to shrink the number of government employees by automating their tasks. This creates an atmosphere of uncertainty and anxiety for many public servants.
国荣
It's a huge morale issue. If you're a patent examiner or an IRS agent and you hear that your bosses are actively testing an AI to do your job, it's hard to stay motivated. It also raises the question: what happens to all that human expertise when it's replaced by an algorithm? It just... disappears.
纪飞
That expertise is a critical asset. An experienced air traffic controller or tax auditor has years of nuanced judgment that can't be easily coded. The impact of losing that is subtle but significant. It could lead to a government that is more brittle, less able to handle unexpected situations that don't fit the AI's programming.
国荣
And what about the impact on citizens? We talked about the REACH VET program. The impact there was that female veterans were not getting the help they desperately needed. That’s not just a statistic; that has real, tragic consequences for individuals and their families. It shows how algorithmic bias can cause direct harm.
纪飞
The VA did acknowledge the flaw and says it has updated the algorithm with new risk factors specific to women, like military sexual trauma and infertility. Since 2017, they say the program has helped identify over 117,000 at-risk veterans. But the initial failure has had a lasting impact on trust.
国荣
It’s like a doctor who misdiagnoses you. Even if they correct the mistake later, it shakes your confidence in their judgment. The impact on public trust is huge. If people believe the government's AI systems are biased or unfair, they'll be less likely to engage with them, which could undermine the whole system.
纪飞
Another impact is on safety and security. With the TSA's use of facial recognition, a federal report found it to be over 99% accurate. However, external studies have consistently shown these technologies are less accurate for people of color and women. This creates a potential for innocent people to be misidentified.
国荣
Exactly. A 1% error rate sounds low, but when you're screening millions of people a day, that's tens of thousands of potential false matches. For the person who gets flagged, the impact isn't a small statistical error; it's a stressful, potentially humiliating experience of being treated like a suspect.
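(A quick back-of-the-envelope sketch of 国荣's point, assuming a screening volume of roughly two million passengers per day, an illustrative figure rather than one from the article, and the roughly 1 percent error rate mentioned above.)

```python
# Illustrative estimate only: the daily passenger volume below is an assumed
# round number, not a figure reported in the article.
passengers_per_day = 2_000_000   # assumed daily airport screening volume
error_rate = 0.01                # the ~1 percent misidentification rate discussed above

false_matches_per_day = passengers_per_day * error_rate
print(f"Potential false matches per day: {false_matches_per_day:,.0f}")
# Prints: Potential false matches per day: 20,000
```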
纪飞
The administration's response, as a former DHS official noted, was that these tools were originally meant to *help* officers be more efficient, not replace them. The idea was to free them up to interact more with passengers. But under the new push, contractors believe the goal is now to reduce manpower.
国荣
So the impact is a shift from augmentation to automation. Instead of a human with an AI assistant, you get just the AI. This changes the nature of the service. It becomes less about human interaction and security, and more about processing people as quickly as possible, like products on a conveyor belt.
纪飞
This has a direct societal impact. Elizabeth Laird of the Center for Democracy and Technology warns of a "fundamental mismatch" between what AI can do and what citizens expect from their government. We expect fairness, recourse, and a human touch, which these systems are not designed to provide.
国荣
That’s the perfect way to put it. There’s an expectation gap. We're being given a vending machine when what we need is a conversation. The impact is a more impersonal, less responsive government that might be efficient on paper but feels alienating and unjust to the people it's supposed to serve.
纪飞
Looking to the future, the administration is expected to release a comprehensive White House AI plan this month. This will likely accelerate all the trends we've been discussing. We can expect a stronger push for AI to take on even more central decision-making roles across all agencies.
国荣
So, what does that future look like? If AI is making more decisions, does that mean a smaller federal workforce is inevitable? It seems like the logical conclusion of this path is a government run by a skeleton crew of human overseers and a vast network of interconnected AI systems.
纪飞
That appears to be the long-term vision for some. The goal of centralizing Treasury's IT and HR and having AI "do everything" is a clear indicator. The future could involve AI not just reviewing tax audits, but conducting them, and not just identifying military targets, but perhaps even approving strikes.
国荣
That's a scary thought. It raises fundamental questions about what we want our government to be. Is it a service provider, accountable to its citizens, or is it a highly efficient, automated machine that optimizes for cost and speed above all else? The choices we make now will decide that future.
纪飞
The key thing for listeners to understand is that this isn't science fiction. The decisions about the role of AI in our government are being made right now. The future will be shaped by how much we, as citizens, pay attention and demand transparency and accountability for these powerful new systems.
国荣
It’s about finding the right balance. AI can be an amazing tool. It can help find cures for diseases, make our lives more convenient, and yes, even make government more efficient. But a tool is only as good as the hand that wields it. We need to ensure it's used to help people, not just to replace them.
纪飞
That's the end of today's discussion. We've seen that the government's push for AI is a complex issue, with the promise of efficiency set against significant risks of bias, job displacement, and a loss of human oversight. The debate is happening now, and its outcome will shape our future.
国荣
From the Pentagon's "live maps" to AI tax auditors, this technology is poised to change our relationship with the government in profound ways. It's a conversation that requires not just technical expertise, but a deep understanding of what we value as a society. Thank you for listening to Goose Pod.
纪飞
We hope it gave you a lot to think about, 老张. We'll be back tomorrow with another deep dive into the topics that matter. See you then.

## Government Embraces AI for Efficiency and Automation, Sparking Debate on Risks

This report from **The Washington Post**, published on **July 15, 2025**, details the Trump administration's aggressive push to integrate artificial intelligence (AI) across federal agencies, a strategy influenced by Elon Musk's vision of AI surpassing human capabilities in government tasks. The initiative aims to streamline operations, reduce costs, and enhance efficiency, but it raises significant concerns among government watchdogs about unproven technology making critical decisions and about the impact on the federal workforce.

### Key Findings and Initiatives:

* **Broad Agency Adoption:** AI is being explored and implemented across nearly every executive branch agency, including the Pentagon, Federal Aviation Administration (FAA), Internal Revenue Service (IRS), U.S. Patent and Trademark Office (USPTO), and Transportation Security Administration (TSA).
* **Elon Musk's Influence:** Musk's ideas about AI's potential to outperform federal employees are a driving force. His startup, xAI, is offering its chatbot Grok for use by Pentagon personnel.
* **Workforce Reduction Goal:** A significant aim of these AI programs is to shrink the federal workforce, mirroring the approach of Musk's "U.S. DOGE Service."
* **Efficiency and Cost Savings:** The promised benefits include reduced wait times and lower costs for taxpayers.
* **Policy Shift:** President Trump repealed President Biden's 2023 executive order on AI, removing "guardrails" and accelerating the AI rollout. A comprehensive White House AI plan is anticipated this month.

### Agency-Specific AI Deployments and Plans:

* **Pentagon:**
  * **NGA Maven:** This core AI program, launched in 2017, has seen its user base **more than double** since January, with over **25,000 U.S. military and civilian personnel** now using it globally.
  * **Capabilities:** NGA Maven processes imagery from satellites, drones, and other sources to identify potential targets. It is being expanded to interpret audio and text, aiming to create a "live map" of operations and enable **1,000 accurate decisions about potential targets within an hour**.
  * **Maven Smart System:** Spending on this Palantir-provided component has been **more than doubled**, with an additional **$795 million** allocated. It analyzes sensor data to assist in target identification and strike approval and has been used for logistics planning.
  * **Commercial Technology Reliance:** Executive orders and memos encourage greater reliance on commercial AI technologies.
* **Federal Aviation Administration (FAA):**
  * **Air Traffic Control:** AI software is being tested to assist air traffic controllers, with the goal of reducing fatigue and distraction. Humans will remain in control, but AI may handle repetitive tasks and airspace monitoring.
  * **Staffing Impact:** Plans include "planning for less people" due to ongoing staff shortages.
  * **Other Uses:** AI is being explored for analyzing air traffic and crash data and for predicting aircraft maintenance needs.
  * **Safety Focus:** The FAA is investigating AI's role in improving safety, particularly in response to recent incidents. Air traffic controllers **do not currently use the technology**, but it is being used to scan incident reports for risks.
* **U.S. Patent and Trademark Office (USPTO):**
  * **Patent Examination:** AI is being tested to automate parts of the patent examiner's job.
  * **Pilot Program:** Patent applicants can opt into a program in which AI searches agency databases for similar patents and emails applicants a list of the **10 most relevant documents**.
  * **Mandatory Use:** From **July 21**, it will become **"mandatory"** for examiners to use an AI-based search tool for similarity checks.
  * **Report Writing:** AI's ability to write reports and analyze data is seen as potentially beneficial for examiners.
  * **Rushed Rollout:** The launch of another new AI search tool moved so quickly that staff and some top leaders did not fully understand it; a delay was suggested, and the release timing is unclear.
* **Transportation Security Administration (TSA):**
  * **Facial Recognition:** Facial recognition cameras for ID checks have been rolled out in over **200 airports nationwide** since 2022. The agency claims **over 99 percent accuracy** across all demographic groups tested, despite studies showing limitations, particularly for people of color.
  * **Automated Kiosks:** Experimentation with automated kiosks for pre-checked passengers is underway.
  * **Manpower Reduction:** While former officials said AI was meant to enhance efficiency, contractors suggest the Trump administration's acceleration of AI projects could lead to a reduction in TSA officers.
* **Internal Revenue Service (IRS):**
  * **Expanded AI Use:** Beyond internal queries and chatbots, the IRS is looking to offload more significant tasks to AI, including managing tax audits.
  * **Centralization Goal:** The "end game" is to centralize IT and HR for the Treasury Department, with AI handling many functions.
  * **Oversight Concerns:** Concerns have been raised about the lack of oversight of this ambitious effort to centralize IRS work and feed it to AI.
  * **Modernization Plan:** The Treasury Department states that CIO Sam Corcos is implementing a long-delayed IRS modernization plan.
* **Department of Veterans Affairs (VA):**
  * **Aggressive AI Adoption:** The VA is actively deploying AI, with **hundreds of uses** reported last year.
  * **REACH VET:** This algorithm prioritizes mental health assistance for veterans at high risk of suicide. An investigation found it previously prioritized White men, particularly those who are divorced or widowed, and did not adequately consider risk factors for female veterans.
  * **Algorithm Update:** REACH VET has been updated to include factors specific to women, such as military sexual trauma, pregnancy, ovarian cysts, and infertility.
  * **Impact:** Since its launch in **2017**, REACH VET has identified over **117,000 at-risk veterans**.
  * **"High-Impact" Designation:** The Trump administration has replaced the Biden administration's "safety impacting" and "rights impacting" labels for sensitive programs with "high-impact."

### Notable Risks and Concerns:

* **Unproven Technology:** Government watchdogs worry that the administration's automation drive, combined with potential layoffs, could give unproven AI an outsized role.
* **Decision-Making Errors:** Elizabeth Laird of the Center for Democracy and Technology warns that if AI drives federal decision-making instead of aiding human experts, glitches could unfairly deprive people of benefits or harm public safety.
* **Mismatch with Citizen Expectations:** Laird highlights a "fundamental mismatch" between AI capabilities and what citizens expect from government.
* **Disregard for Safety and Staff:** Some federal workers have expressed alarm at the administration's perceived disregard for safety and government staff.
* **Facial Recognition Accuracy:** Despite TSA's claims, studies show facial recognition is imperfect and can be less accurate for people of color.

### Expert Opinions:

* **Jennifer Pahlka** (former deputy U.S. chief technology officer): Believes AI can help government get tasks done, and done faster.
* **Sahil Lavingia** (former DOGE staffer): Advocates aggressive AI deployment, stating no task should be off-limits for experimentation, "especially in war," and expressing a lack of trust in humans for "life and death tasks."

This report underscores a significant shift in the U.S. government's approach to technology, with a strong emphasis on AI-driven automation under the Trump administration, while raising critical questions about its implementation, oversight, and potential societal impact.

The government wants AI to fight wars and review your taxes


Elon Musk has receded from Washington but one of his most disruptive ideas about government is surging inside the Trump administration.

Artificial intelligence, Musk has said, can do a better job than federal employees at many tasks — a notion being tested by AI projects trying to automate work across nearly every agency in the executive branch.

The Federal Aviation Administration is exploring whether AI can be a better air traffic controller. The Pentagon is using AI to help officers distinguish between combatants and civilians in the field, and said Monday that its personnel would begin using the chatbot Grok offered by Musk’s start-up, xAI, which is trying to gain a foothold in federal agencies.

Artificial intelligence technology could soon play a central role in tax audits, airport security screenings and more, according to public documents and interviews with current and former federal workers.

Many of these AI programs aim to shrink the federal workforce — continuing the work of Musk’s U.S. DOGE Service that has cut thousands of government employees. Government AI is also promised to reduce wait times and lower costs to American taxpayers.

Government tech watchdogs worry the Trump administration’s automation drive — combined with federal layoffs — will give unproven technology an outsize role.

If AI drives federal decision-making instead of aiding human experts, glitches could unfairly deprive people of benefits or harm public safety, said Elizabeth Laird, a director at the Washington-based nonprofit Center for Democracy and Technology.

There is “a fundamental mismatch” between what AI can do and what citizens expect from government, she said.

President Joe Biden in 2023 signed an executive order aimed at spurring government use of AI, while also containing its risks. In January, President Donald Trump repealed that order. His administration has removed AI guardrails while seeking to accelerate its rollout.

A comprehensive White House AI plan is expected this month.

“President Trump has long stressed the importance of American AI dominance, and his administration is using every possible tool to streamline our government and deliver more efficient results for the American people,” White House spokeswoman Anna Kelly said in a statement.

The Washington Post reviewed government disclosures and interviewed current and former federal workers about plans to expand government AI.

Some expressed alarm at the administration’s disregard for safety and government staff. Others saw potential to improve efficiency.

“In government, you have so much that needs doing and AI can help get it done and get it done faster,” said Jennifer Pahlka, who was deputy U.S. chief technology officer in President Barack Obama’s second term.

Sahil Lavingia, a former DOGE staffer who pushed the Department of Veterans Affairs to use AI to identify potentially wasteful spending, said government should aggressively deploy the technology becoming so prevalent elsewhere. Government processes are efficient today, he said, “but could be made more efficient with AI.”

Lavingia argued no task should be off limits for experimentation, “especially in war.”

“I don’t trust humans with life and death tasks,” he said, echoing a maximalist view of AI’s potential shared by some DOGE staffers.

Here’s how AI is being deployed within some government agencies embracing the technology.

Waging war

The Pentagon is charging ahead with artificial intelligence this year. The number of military and civilian personnel using NGA Maven, one of the Pentagon’s core AI programs, has more than doubled since January, said Vice Adm. Frank Whitworth, director of the National Geospatial-Intelligence Agency, in a May speech.

The system, launched in 2017, processes imagery from satellites, drones and other sources to detect and identify potential targets for humans to assess. More than 25,000 U.S. military and civilian personnel around the world now use NGA Maven.

NGA Maven is being expanded, Whitworth said, to interpret data such as audio and text in conjunction with imagery, offering commanders a “live map” of military operations.

The aim is to help it better distinguish combatants from noncombatants and enemies from allies, and for units using NGA Maven to be able to make 1,000 accurate decisions about potential targets within an hour.

The Pentagon’s AI drive under Trump will give tech companies like data-mining firm Palantir a larger role in American military power.

A White House executive order and a Defense Department memo have instructed federal officials to rely more on commercial technology.

In May, the Defense Department announced it was more than doubling its planned spending on a core AI system that is part of NGA Maven called Maven Smart System, allocating an additional $795 million.

The software, provided by Palantir, analyzes sensor data to help soldiers identify targets and commanders to approve strikes. It has been used for planning logistics to support deployed troops.

Air traffic control

The Federal Aviation Administration is testing whether AI software can reliably aid air traffic controllers, according to a person with knowledge of the agency’s plans who spoke on the condition of anonymity to avoid retaliation.

Humans would remain in the loop, the person said, but AI would help reduce fatigue and distraction. Air traffic control staff would continue to communicate with pilots, for example, but AI might handle repetitive and data-driven tasks, monitoring airspace more generally.

Due in part to ongoing staff shortages in air traffic control, the agency’s AI plans include “planning for less people,” the person said.

Other uses for AI being explored at the FAA include analyzing air traffic or crash data and predicting when aircraft are likely to need maintenance, the person said.

The FAA sees artificial intelligence as a potential tool to address airline safety concerns that were brought to the fore by the January midair collision that killed more than 60 people near Reagan National Airport.

“The FAA is exploring how AI can improve safety,” the agency said in an unsigned statement, but air traffic controllers do not currently use the technology. That includes using the technology to scan incident reports and other data to find risks around airports with a mixture of helicopter and airplane traffic, the statement said, while emphasizing humans will remain in charge.

“FAA subject matter experts are essential to our oversight and safety mission and that will never change,” the statement said.

Examining patents

The U.S. Patent and Trademark Office wants to test whether part of the job of patent examiners — who review patent applications to determine their validity — can be replaced by AI, according to records obtained by The Post and an agency employee who spoke on the condition of anonymity to describe internal deliberations.

Patent seekers who opt into a pilot program will have their applications fed into an AI search tool that will trawl the agency’s databases for existing patents with similar information. It will email applicants a list of the 10 most relevant documents, with the goal of efficiently spurring people to revise, alter or withdraw their application, the records show.

From July 21, per an email obtained by The Post, it will become “mandatory” for examiners to use an AI-based search tool to run a similarity check on patent applications. The agency did not respond to a question asking if it is the same technology used in the pilot program that will email patent applicants.

The agency employee said AI could have an expansive role at USPTO. Examiners write reports explaining whether applications fall afoul of patent laws or rules. The large language models behind recent AI systems like ChatGPT “are very good at writing reports, and their ability to analyze keeps getting better,” the employee said.

This month, the agency had planned to roll out another new AI search tool that examiners will be expected to use, according to internal documents reviewed by The Post. But the launch moved so quickly that concerns arose that USPTO workers — and some top leaders — did not understand what was about to happen.

Some staff suggested delaying the launch, the documents show, and it is unclear when it will ultimately be released.

USPTO referred questions to the Commerce Department, which shared a statement from an unnamed spokesperson. “At the USPTO, we are evaluating how AI and technology can better support the great work of our patent examiners,” the statement said.

Airport security screening

You may see fewer security staff next time you fly as the Transportation Security Administration automates a growing number of tasks at airport checkpoints.

TSA began rolling out facial recognition cameras to check IDs in 2022, a program now live in more than 200 airports nationwide.

Despite studies showing that facial recognition is not perfect and less accurate at identifying people of color, the agency says it is more effective at spotting impostors than human reviewers. A federal report this year found TSA’s facial recognition is more than 99 percent accurate across all demographic groups tested.

The agency says it is experimenting with automated kiosks that allow pre-checked passengers to pass through security with “minimal to no assistance” from TSA officers.

During the Biden administration, these and other AI efforts at TSA were aimed at helping security officers be more efficient — not replacing them, said a former technology official at the Department of Homeland Security, TSA’s parent agency, who spoke on the condition of anonymity to discuss internal matters.

“It frees up the officer to spend more time interacting with a passenger,” the former official said.

The new Trump administration has indicated it wants to accelerate AI projects, which could reduce the number of TSA officers at airports, according to Galvin Widjaja, CEO of Austin-based Lauretta.io, a contractor which works with TSA and DHS on tools for screening airport travelers.

“If an AI can make the decision, and there’s an opportunity to reduce the manpower, they’re going to do that,” Widjaja said in an interview.

Russ Read, a spokesman for TSA, said in an emailed statement that “the future of aviation security will be a combination of human talent and technological innovation.”

Tax audits

The Internal Revenue Service has an AI program to help employees query its internal manual, in addition to chatbots for a variety of internal uses. But the agency is now looking to off-load more significant tasks to AI tools.

Once the new administration took over, with a mandate from DOGE that targeted the IRS, the agency examined the feasibility of deploying AI to manage tax audits, according to a person familiar with the matter, speaking on the condition of anonymity for fear of retribution.

The push to automate work so central to the IRS’s mission underscores a broader strategy: to delegate functions typically left to human experts to powerful software instead. “The end game is to have one IT, HR, etc., for Treasury and get AI to do everything,” the person said.

A DOGE official, start-up founder Sam Corcos, has been overseeing work to deploy AI more broadly at the IRS.

But the lack of oversight of an ambitious effort to centralize the work of the IRS and feed it to a powerful AI tool has raised internal worries, the person said.

“The IRS has used AI for business functions including operational efficiency, fraud detection, and taxpayer services for a long time,” a Treasury Department spokeswoman said in a statement. “Treasury CIO Sam Corcos is implementing the fulsome IRS modernization plan that taxpayers have deserved for over three decades.”

Caring for veterans

In April, the Department of Veterans Affairs’s top technology official emailed lieutenants with his interpretation of the Trump administration’s new AI policy.

“The message is clear to me,” said Charles Worthington, who serves as VA’s chief technology officer and chief AI officer. “Be aggressive in seizing AI opportunity, while implementing common sense safeguards to ensure these tools are trustworthy when they are used in VA’s most sensitive areas such as benefit determinations and health care.” The email was published to VA’s website in response to a public records request.

VA said it deployed hundreds of uses of artificial intelligence last year, making it one of the agencies most actively tapping AI based on government disclosures. Among the most controversial of these programs has been REACH VET, a scoring algorithm used to prioritize mental health assistance to patients predicted to be at the highest risk of suicide.

Last year, an investigation by the Fuller Project, a nonprofit news organization, found that the system prioritized help to White men, especially those who have been divorced or widowed — groups studies show to be at the highest risk of suicide.

VA acknowledged that REACH VET previously did not consider known risk factors for suicide in female veterans, making it less likely that women struggling with thoughts of suicide would be flagged for assistance.

Pete Kasperowicz, a VA spokesman, said in an email that the agency recently updated the REACH VET algorithm to account for several new risk factors specific to women, including military sexual trauma, pregnancy, ovarian cysts and infertility. Since the program launched in 2017, it has helped identify more than 117,000 at-risk veterans, prompting staff to offer them additional support and services, he said.

REACH VET was one of over 300 AI applications that the Biden administration labeled “safety impacting” or “rights impacting” in annual transparency reports. The Trump administration, which has derided the “risk-averse approach of the previous administration,” discontinued those labels and will instead denote sensitive programs as “high-impact.”
