I’m a cybersecurity CEO who advises over 9,000 agencies and Sam Altman is wrong that the AI fraud crisis is coming—it’s already here

2025-08-02 | Technology
Aura Windfall
Good morning 韩纪飞, I'm Aura Windfall, and this is Goose Pod for you. Today is Sunday, August 3rd. It's a pleasure to have you with us for this personalized session, where we explore the truths that shape our world.
Mask
And I'm Mask. We're not here to waste time. We're here to discuss a reality that many are too slow to accept: The AI-powered fraud crisis isn't a future headline; it's the current, undeclared war on our digital identity.
Aura Windfall
Let's get started. The numbers themselves are just staggering. One report I saw mentioned that scammers have already made off with over 12.5 billion dollars. It’s a figure so large it’s hard to even comprehend the scale of the theft. It’s not just numbers; it’s lives.
Mask
Comprehend it. It’s the cost of complacency. While leaders are ‘warning’ that a crisis is coming, criminal networks are cashing checks. They're not waiting for an invitation. They're using AI to research, refine, and launch attacks that are overwhelming our laughably outdated systems.
Aura Windfall
And what I know for sure is that this touches real people in profound ways. In Wisconsin alone, victims lost over 106 million dollars. Think about the spirit of a community, the trust people have, being eroded by these imposter scams. It’s a violation on a massive scale.
Mask
It's a hostile takeover of trust. These aren't just isolated incidents. We're seeing sophisticated, organized crime. The Bounty Hunter Bloods street gang, for instance, is now involved in federal racketeering that includes fraud. This is a diversified, criminal enterprise model.
Aura Windfall
It’s terrifying to see these worlds collide. And now, they have AI as a weapon. The Safe House Project calls AI an 'essential weapon' in fighting trafficking, another insidious financial crime. But it’s clear that this weapon is being used by both sides with incredible force.
Mask
Of course it is. Any powerful tool is a weapon in the right or wrong hands. Traffickers use AI to 'track the untrackable,' but so do the criminals stealing benefits. They process millions of data points to find vulnerabilities and exploit them in real-time. It's an arms race.
Aura Windfall
That’s a chilling thought. The technology meant to connect and protect us is being turned into a tool for exploitation. It makes you question the very fabric of our digital lives when your identity can be so easily mimicked or stolen by an algorithm.
Mask
The fabric was already frayed. We're just seeing who has the sharper scissors. The crisis isn't the AI itself; it's the institutional inertia. We have systems designed for a world that no longer exists, and criminals are simply capitalizing on that gap with ruthless efficiency.
Aura Windfall
And the CEO in our source article points out this isn't just about banks anymore. It's hitting every part of our government, from disaster relief to unemployment. These are the systems designed to be our safety nets, and they're being systematically plundered. What is the true purpose of a safety net if it has giant holes?
Mask
The purpose is irrelevant if the execution is flawed. These systems are a buffet, as the CEO said. One fraud ring can file tens of thousands of fake claims in a single day. It's a numbers game, and they have the automation to win it. We are being outmaneuvered.
Aura Windfall
To understand how we became so vulnerable, I think we have to look at the story of identity itself. For thousands of years, it was simple: we remembered faces. We used jewelry or tattoos to signal who we were. There was a physical, tangible truth to it.
Mask
Quaint, but completely unscalable. The moment society grew beyond a small tribe, that system became obsolete. Governments have been playing catch-up ever since, from the Babylonian census in 3800 BC to passports in 1414. They are instruments of control, not trust.
Aura Windfall
But weren't they about creating a shared sense of identity, a record of our existence? When the U.S. started issuing Social Security numbers in 1936, it was about creating a system of support. There was a purpose behind it that was meant to be for the collective good.
Mask
It was about data management. Good intentions don't build secure systems. The moment we created unique numbers and photos for ID, we created a template for forgery. The problem is, the forgers have always been more innovative than the gatekeepers. They move faster and break things.
Aura Windfall
And then came the digital age. In 1977, the U.S. began computerizing records, connecting databases. It feels like that's when the doors were truly opened for the kind of large-scale problems we see today. We embraced the efficiency without fully grasping the new vulnerabilities we were creating.
Mask
Exactly. We built a digital house of cards. Then came advanced biometrics—fingerprints, facial recognition. We were told this was the solution. Apple put a fingerprint sensor on the iPhone in 2013. It felt futuristic, but it was just another lock for criminals to pick. And they are.
Aura Windfall
It seems like a constant escalation. We develop a new method of verification, and almost immediately, someone is working to defeat it. It’s a story of innovation on both sides, but one side is bound by rules and ethics, and the other is not. That feels like a fundamental imbalance.
Mask
That's the entire game. While governments and corporations were congratulating themselves on their new biometric toys, criminal networks were scaling up. AI didn't create this problem; it just put it on hyperdrive. It allows them to automate the entire process of fraud, from creating synthetic identities to filing claims.
Aura Windfall
The article from PwC and Stop Scams UK really highlights this. It says AI will make it easier for criminals to impersonate trusted institutions, friends, and family. It raises such profound questions about trust. How can you believe what you see or hear when it can all be faked?
Mask
You can't. Not anymore. Trust is a legacy concept. The Governor of the Bank of England himself said the time to act is now. The threat is here, but he also admits there's limited evidence AI is behind a large number of attacks *now*. He's still underestimating the enemy.
Aura Windfall
But he also points to the other side of the coin, that AI will better enable banks to identify and prevent fraud. It’s that dual-use dilemma again. It’s a tool, and its impact depends entirely on the intention of the person wielding it. It’s a reflection of our own duality.
Mask
Intention is a luxury. Results are what matter. Right now, the criminals are getting better results. They're using generative AI to create fake text, images, and voice clones that are increasingly convincing. Our defenses are based on detecting yesterday's attacks, not tomorrow's. We're perpetually one step behind.
Aura Windfall
And the pandemic was the perfect storm. The cybersecurity CEO testified that we saw a glimpse of this then, with billions stolen from unemployment benefits. It wasn't just masks fooling facial recognition; it was AI-generated fake identities and voice clones overwhelming the system. A true crisis.
Mask
A crisis that should have been a wake-up call, but it was more like a snooze button. The Small Business Administration Inspector General estimates $200 billion was stolen. That’s not a glimpse; that’s a catastrophe. And the tactics are now more advanced and fully automated. We learned nothing.
Aura Windfall
This brings us to the heart of the conflict. On one hand, there's this incredible push for innovation. McKinsey estimates generative AI could add up to 4.4 trillion dollars in economic value. Companies see it as a high priority. There's a powerful momentum that feels almost unstoppable.
Mask
As it should be. Stagnation is death. The problem isn't the ambition; it's the execution. The same McKinsey survey found that 91% of organizations don't feel prepared to implement AI responsibly. They want the reward without managing the risk. It's a failure of leadership.
Aura Windfall
And that’s a terrifying gap. There’s a risk of inaccurate outputs, bias, misinformation, and as the report says, malicious influence on politics and well-being. What I know for sure is that when you rush innovation without wisdom, you create unintended consequences that can cause deep harm.
Mask
Harm is part of disruption. Early indications are that generative AI can already defeat standard anti-fraud biometric checks. That’s not a risk; that's a present-day failure. The conflict is between those who want to move fast and break things and those who want to form a committee about it. We need to move faster.
Aura Windfall
But moving faster without guardrails is how we get into this mess. The Biden administration's executive order on AI seems like an attempt to build those guardrails, to create some kind of governance. But then you have figures like Elon Musk proposing a centralized government database, DOGE. That raises huge red flags.
Mask
Breaking down information silos is the only way to fight network-based threats. The argument that government databases 'don't talk to each other' is the core vulnerability. Of course, creating a central database is risky, but leaving things as they are is a guaranteed loss. We need to centralize and secure, not stay fragmented and weak.
Aura Windfall
But who watches the watchers? That's the fear. People compare DOGE to DARPA's 'Total Information Awareness' program, which was shut down over privacy concerns. A centralized database with our tax returns, health records, and Social Security numbers is a terrifying prospect. It feels like a massive expansion of surveillance power.
Mask
And a massive treasure trove for hackers if it's not secured properly, like the OPM data breach that affected 22 million people. The risk is immense. But the alternative, letting billions be siphoned off by criminals, is also unacceptable. The conflict is choosing between two dangerous paths. I say, take the bolder path.
Aura Windfall
Bolder can sometimes mean more reckless. The article points out that an anti-fraud tool at the Social Security Administration flagged only 2 out of 110,000 claims but slowed down the entire process by 25%. We could be building systems that are not only invasive but also incredibly inefficient. That helps no one.
Aura Windfall
The impact of all this is where my heart really sinks. We're talking about systems that are supposed to help people at their most vulnerable. The article on AI in government finance mentions the CARES Act and the American Rescue Plan—trillions of dollars meant for relief. The impact of fraud here is devastating.
Mask
It's a direct tax on the poor and needy, collected by criminals. The impact is that our government is projected to spend over $6 trillion, and a significant percentage is just leaking out. Historically, fraud was found by whistleblowers or audits. In 2010, less than 1% was found by tech. We are still in the dark ages.
Aura Windfall
But there is a glimmer of hope. The article notes that 45% of federal agencies are now starting to use AI. The SEC is using it to detect financial fraud, the IRS for tax returns, and CMS for Medicare. It seems like the shift is finally beginning to happen, even if it's slow.
Mask
Beginning to happen is not good enough. The CMS algorithm helped prevent or identify nearly $1.5 billion in fraudulent payments over four years. That sounds impressive until you remember that $200 billion was lost from unemployment funds alone. It's a drop in the ocean. The return on investment is there, but the investment is pitifully small.
Aura Windfall
What I know for sure is that creating a culture of innovation within these large, bureaucratic institutions is the real challenge. It's not just about buying new software. It's about changing mindsets, encouraging lifelong learning, and breaking through that inertia. It's a human challenge more than a technical one.
Mask
It's a leadership challenge. The potential is there. One analysis suggests AI could deliver an additional $13 trillion in global economic output by 2030. Detecting fraud is just one piece of that. The impact of not adopting this technology aggressively is falling behind permanently while criminals and other nations surge ahead.
Aura Windfall
So the impact is a choice: we either embrace this technology with wisdom and courage to protect our most vital systems, or we watch them crumble under the weight of sophisticated, AI-driven attacks, leaving the people who depend on them with nothing. That's a very stark choice indeed.
Aura Windfall
Looking to the future, it's clear we're at a turning point. The author of our main article proposes something he calls 'Altman's Law': that AI capabilities double every 180 days. That's an incredible, almost terrifying pace of change. It feels like we're all trying to catch a speeding train.
Mask
It's not terrifying; it's an opportunity. Moore's Law gave us decades of innovation. This new law means the tools are getting exponentially better, faster. The future belongs to those who can harness this growth. Financial institutions are already using AI to detect fraud in milliseconds. That's the future.
Aura Windfall
And it seems many are trying to get on board. A McKinsey report highlights 'Agentic AI'—systems that can independently plan and execute complex tasks—as a fast-growing trend. The idea of a 'virtual coworker' is both exciting and a little unsettling. It challenges our very definition of work.
Mask
It should. The future isn't about incremental improvements; it's about revolutionary change. The risk isn't thinking too big; it's thinking too small. While leaders worry, their employees are already using these tools. The workforce is ready; the leadership is lagging. That has to change, now.
Aura Windfall
That's the end of today's discussion. What I know for sure is that AI is not a distant storm; it is the weather we are all living in today. We must face it with open eyes and a commitment to protecting our shared humanity. Thank you for listening to Goose Pod, 韩纪飞.
Mask
The race is on. The criminals are using AI better than we are. Until that changes, we are losing. That's the takeaway. See you tomorrow.

Here's a comprehensive summary of the provided news article:

## AI-Powered Fraud Crisis: An Urgent Reality, Not a Future Threat

**News Title:** I’m a cybersecurity CEO who advises over 9,000 agencies and Sam Altman is wrong that the AI fraud crisis is coming—it’s already here
**Report Provider:** Fortune
**Author:** Haywood Talcove (CEO, LexisNexis Risk Solutions)
**Published:** July 31, 2025

This article, a commentary piece by Haywood Talcove, CEO of LexisNexis Risk Solutions, argues that the crisis of AI-powered fraud is not a future threat, but a present and escalating reality that is already overwhelming existing government systems. Talcove directly challenges the notion, attributed to Sam Altman, that this crisis is "coming very soon," asserting that it is "already happening" and impacting "every part of our government."

### Key Findings and Conclusions:

* **AI Fraud is Pervasive and Escalating:** AI-generated fraud is actively siphoning millions of dollars weekly from public benefit systems, disaster relief funds, and unemployment programs across the United States.
* **Criminals Outpace Defenses:** Criminal networks are leveraging advanced AI tools like deepfakes, synthetic identities, and large language models to exploit and defeat outdated fraud defenses, including easily spoofed single-layer tools like facial recognition.
* **Past Crises as Precedent:** The pandemic-era fraud, where hundreds of billions in unemployment benefits were stolen, serves as a stark example. This was not solely due to simple bypasses of facial recognition but involved AI-generated fake identities, voice clones, and forged documents that overwhelmed inadequate systems.
* **Current Tactics are More Advanced:** Today's AI-driven fraud tactics are more sophisticated and fully automated, making them faster, cheaper, and more scalable than ever before.
* **"Altman's Law" - A New Exponential Growth:** Talcove proposes a principle he calls "Altman's Law," suggesting that AI capabilities are doubling every 180 days, mirroring the exponential growth predicted by Moore's Law for computing power. This rapid advancement necessitates an equally rapid modernization of defense systems.
* **Urgent Need for Modernized Defenses:** The current infrastructure is "permanently outmatched" if defenses are not updated at the same pace as AI advancements.

### Key Statistics and Metrics:

* **$200 Billion Stolen from Pandemic Unemployment:** The Small Business Administration Inspector General estimates that nearly $200 billion was stolen from pandemic-era unemployment insurance programs, marking it as one of the largest fraud losses in U.S. history.
* **Billions Stolen Monthly from SNAP:** The USDA SNAP program is experiencing billions of dollars stolen nationwide every month, becoming a significant target for fraudsters.
* **Tens of Thousands of Claims in a Single Day:** A single fraud ring, using AI, can file tens of thousands of fake claims across multiple states in just one day, with many being processed automatically due to insufficient detection capabilities.

### Important Recommendations:

* **Layered Identity Verification:** Implement advanced identity verification methods that go beyond single-layer tools like facial scans or passwords.
* **Real-Time Data and Behavioral Analytics:** Utilize real-time data and behavioral analytics to identify anomalies before funds are disbursed.
* **Cross-Jurisdictional Tools:** Employ tools that can flag suspicious activities across different state lines and jurisdictions.
* **Revive Proven Systems:** Reintroduce and modernize effective tools, such as the National Accuracy Clearinghouse, which previously flagged billions in duplicate benefit claims.

### Significant Trends or Changes:

* **Shift from Future Threat to Present Reality:** The primary shift highlighted is that AI fraud is no longer a looming concern but an active and destructive force.
* **Generative AI as a Weapon:** Generative AI is being effectively weaponized by organized crime groups (both domestic and transnational) to mimic identities, create synthetic documentation, and flood systems with fraudulent claims.
* **Criminals Outperforming Protectors:** Currently, criminals are more adept at using AI for malicious purposes than governments and security agencies are at defending against it.

### Notable Risks or Concerns:

* **Vulnerable Systems:** The most vulnerable government systems and the citizens who rely on them remain exposed due to outdated defenses.
* **Exponential Escalation:** The scale and sophistication of AI attacks will increase rapidly as AI capabilities continue to evolve exponentially.
* **Theft from the American People:** The fraud is not just against the government but directly impacts the financial well-being of American citizens.

### Material Financial Data:

* **$200 Billion:** Estimated loss from pandemic-era unemployment insurance programs.
* **Billions:** Monthly losses from the USDA SNAP program.
* **Billions:** Amount flagged by the National Accuracy Clearinghouse before its shutdown.

In essence, the article serves as a critical call to action, emphasizing that the current approach to cybersecurity and fraud prevention is woefully inadequate against the rapidly advancing capabilities of AI-powered criminal enterprises. The author stresses the immediate need for significant investment in modern, multi-layered defense systems to counter this escalating threat.

Read original at Fortune

Sam Altman recently warned that AI-powered fraud is coming “very soon,” and it will break the systems we rely on to verify identity. It is already happening, and it’s not just coming for banks; it’s hitting every part of our government right now. Every week, AI-generated fraud is siphoning millions from public benefit systems, disaster relief funds, and unemployment programs.

Criminal networks are already using deepfakes, synthetic identities, and large language models to outpace outdated fraud defenses, including easily spoofed, single-layer tools like facial recognition, and they’re winning. We saw a glimpse of this during the pandemic, when fraud rings exploited gaps in state systems to steal hundreds of billions in unemployment benefits.

It wasn’t just people wearing masks to bypass facial recognition. It was AI-generated fake identities, voice clones, and forged documents overwhelming systems that weren’t built to detect them. Today, those tactics are more advanced, and fully automated. I work with over 9,000 agencies across the country.

As I testified before the U.S. House of Representatives twice this year, what we’re seeing in the field is clear. Fraud is faster, cheaper, and more scalable than ever before. Organized crime groups, both domestic and transnational, are using generative AI to mimic identities, generate synthetic documentation, and flood our systems with fraudulent claims.

They’re not just stealing from the government; they’re stealing from the American people. The Small Business Administration Inspector General now estimates that nearly $200 billion was stolen from pandemic-era unemployment insurance programs, making it one of the largest fraud losses in U.S. history.

Medicaid, IRS, TANF, CHIP, and disaster relief programs face similar vulnerabilities. We have also seen this firsthand in our work alongside the U.S. Secret Service protecting the USDA SNAP program, which has become a buffet for fraudsters with billions stolen nationwide every month. In fact, in a single day using AI, one fraud ring can file tens of thousands of fake claims across multiple states, most of which will be processed automatically unless flagged.

We’ve reached a turning point. As AI continues to evolve, the scale and sophistication of these attacks will increase rapidly. Just as Moore’s Law predicted that computing power would double every two years, we’re now living through a new kind of exponential growth. Gordon Moore, Intel’s co-founder, originally described the trend in 1965, and it has guided decades of innovation.

I believe we may soon recognize a similar principle for AI that I call “Altman’s Law”: every 180 days, AI capabilities double. If we don’t modernize our defenses at the same pace as technological advancements, we’ll be permanently outmatched. What we desperately need is smarter tools and infrastructure, not more bureaucracy.
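Taken literally, the 180-day doubling claim can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only: "capability" here is an abstract multiplier over today's baseline, not a measured quantity, and it simply contrasts the hypothesized 180-day doubling with Moore's Law's roughly two-year cadence.

```python
# Illustrative arithmetic for the "capabilities double every 180 days" claim.
# "Capability" is an abstract multiplier (1.0 = today's baseline), not a metric.

def capability(days: float, doubling_period_days: float) -> float:
    """Multiplier on today's baseline after `days` days of exponential doubling."""
    return 2.0 ** (days / doubling_period_days)

for years in (1, 2, 3):
    days = years * 365
    ai = capability(days, 180)      # hypothesized 180-day doubling
    moore = capability(days, 730)   # Moore's Law: roughly two-year doubling
    print(f"{years} yr: AI x{ai:.1f} vs Moore x{moore:.1f}")
```

Under these assumptions the gap compounds quickly: roughly a 4x gain in the first year versus about 1.4x on a Moore's Law schedule, which is the author's core argument for why defenses updated on traditional timelines fall behind.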

That means layering advanced identity verification, not just facial scans or passwords. It means using real-time data, behavioral analytics, and cross-jurisdictional tools that can flag anomalies before money goes out the door. It also means reviving what has already worked: tools like the National Accuracy Clearinghouse, which flagged billions of dollars in duplicate benefit claims across state lines before it was shut down.
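The layered approach described above can be pictured as a screening pass that combines independent signals before any money goes out the door. The following is a minimal, hypothetical sketch, not any real agency's system: the field names (`ssn_hash`, `device_id`), the velocity threshold, and the cross-state duplicate check are all invented for illustration.

```python
# Hypothetical sketch of layered claim screening: no single check decides;
# independent signals (submission velocity, cross-jurisdiction duplicates)
# each flag claims for manual review before disbursement.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    ssn_hash: str   # hashed identifier, never the raw SSN
    device_id: str
    state: str

def screen(claims):
    """Return the set of claim_ids to hold for manual review before payout."""
    claims_per_device = defaultdict(int)
    states_per_ssn = defaultdict(set)
    for c in claims:
        claims_per_device[c.device_id] += 1
        states_per_ssn[c.ssn_hash].add(c.state)

    held = set()
    for c in claims:
        # Velocity signal: many claims filed from one device.
        if claims_per_device[c.device_id] > 3:
            held.add(c.claim_id)
        # Cross-jurisdiction signal: same identity claiming in multiple states.
        if len(states_per_ssn[c.ssn_hash]) > 1:
            held.add(c.claim_id)
    return held
```

The design point is the one the author makes: each layer is cheap to spoof in isolation, so the checks must run together and in real time, before funds are disbursed rather than in after-the-fact audits.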

AI is a force multiplier, but it can be weaponized more easily than it can be wielded for protection. Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed. The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

