Cybercrime Hit $20.9 Billion in 2025. Here's Why AI-Powered Scams Are the Real Game-Changer
Cybercrime losses in the United States reached $20.9 billion in 2025, a 26% jump from the previous year, according to the FBI's Internet Crime Complaint Center (IC3). The agency received over 1 million complaints, roughly one every 31 seconds. But the most alarming trend isn't the total dollar amount. It's how artificial intelligence is fundamentally changing the nature of fraud itself. While traditional cybercrime exploits system vulnerabilities, AI-powered scams exploit something far harder to defend: human trust.
How Are Criminals Actually Using AI to Commit Fraud?
In 2025, the IC3 received over 22,000 complaints specifically mentioning artificial intelligence, with losses exceeding $893 million. That figure almost certainly understates the problem, because most victims have no way of knowing AI was used against them. The technology makes scams dramatically more convincing, scalable, and personal. A single criminal operation can now maintain thousands of simultaneous fake relationships, generate professional-sounding emails impersonating your employer, or clone the voice of your grandchild.
The methods are diverse and increasingly sophisticated:
- Voice Cloning: AI tools can replicate someone's voice from just a few seconds of audio. Criminals use this to call older relatives claiming a loved one is in trouble and urgently needs money. Victims reported losses exceeding $5 million to these distress scams in 2025.
- AI Romance Scams: Romance profiles powered by AI chatbots sustain convincing fake relationships for weeks or months, generating personalized messages at massive scale. Confidence and romance fraud cost Americans $929 million in 2025.
- Deepfake Video Endorsements: AI-generated videos of celebrities, financial commentators, and trusted figures endorse fake investment platforms. Investment scams with a confirmed AI link resulted in losses exceeding $632 million.
- Business Email Impersonation: AI generates convincing emails mimicking company CEOs, HR departments, or suppliers, directing employees to wire funds or click malicious links. Businesses reported over $30 million in losses to AI-assisted scams of this type.
Why Is AI Making Fraud So Much Harder to Detect?
Traditional cybersecurity defenses focus on protecting systems: firewalls block malicious traffic, antivirus software detects malware, and email filters catch phishing attempts. But AI-powered fraud doesn't primarily attack systems. It attacks trust. A deepfake video of a trusted financial expert recommending an investment platform looks real because it faithfully reproduces that person's face, voice, and mannerisms. A voice clone of your grandmother sounds like her because it was built from recordings of her actual voice. An AI-generated email from your CEO reads like your CEO because it mimics their exact communication style.
The problem is compounded by accessibility. What once required advanced engineering resources and significant technical expertise can now be done with low-cost public AI platforms available to nearly anyone with an internet connection. This has dramatically lowered the barrier to entry, allowing even low-skilled threat actors to run highly effective attack campaigns.
Nowhere is the scale of the problem clearer than in investment fraud, which emerged as the single biggest source of cybercrime losses in 2025, costing Americans $8.6 billion, more than double the losses recorded just two years earlier. The vast majority of that figure is driven by cryptocurrency investment scams, which alone accounted for $7.2 billion. The playbook is consistent: a stranger makes contact through social media, a dating app, or a seemingly misdirected text that starts a friendly conversation. After building trust over days or weeks, they introduce an investment opportunity, usually cryptocurrency, promising extraordinary returns. Victims are shown fake dashboards with soaring profits. Then, when they try to withdraw their money, they're told they owe taxes or fees first. The scammer disappears with everything.
What Can Individuals and Businesses Actually Do to Protect Themselves?
The FBI launched Operation Level Up specifically to intercept victims of investment scams before they lose everything. In 2025 alone, the operation notified 3,780 victims and saved an estimated $225 million in potential losses. Critically, 78% of those victims had no idea they were being scammed when the FBI contacted them. This suggests that detection and intervention are possible, but they require a different approach than traditional cybersecurity.
For individuals and organizations, defense against AI-powered fraud requires a combination of behavioral awareness, operational controls, and governance:
- Establish Verification Protocols: If a call, video, or message creates a strong emotional reaction and urges you to act quickly, stop, hang up, and call the person back on a number you already know. A cloned voice or deepfake can imitate someone on an incoming call, but it cannot answer that person's real, verified number.
- Create Family Safe Words: Establish a code only your household knows that anyone can use to verify a genuine emergency call. This simple step can prevent voice cloning scams from succeeding.
- Implement Secondary Verification for Transactions: Organizations should require secondary verification procedures for high-value transactions, payment approvals, and sensitive data requests; a minimal sketch of what this can look like follows this list. Traditional single-factor authentication is no longer sufficient.
- Govern AI Usage Within Your Organization: Shadow AI, where employees use public AI platforms without organizational oversight, increases deepfake risk significantly. Organizations need centralized oversight, approved AI platforms, and secure AI governance to reduce exposure.
- Be Skeptical of Urgency: Nearly every successful fraud in the FBI's report exploited manufactured urgency. Legitimate banks, government agencies, and businesses do not ask you to act immediately.
- Never Invest Based on Online Tips: Never invest based on a tip from someone you met online, no matter how long you've been speaking or how legitimate their platform looks. Check whether any investment platform is registered with the SEC before sending money (see the lookup sketch below).
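To make the secondary-verification item concrete, here is a minimal Python sketch of a dual-approval rule for outgoing payments. Everything in it is illustrative: the $10,000 threshold, the email addresses, and the `PaymentRequest` shape are assumptions, and a real implementation would live inside your payment or ERP workflow rather than a standalone script.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # illustrative dollar threshold for requiring two approvers

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    payee: str
    approvers: set = field(default_factory=set)

    def approve(self, employee: str) -> None:
        # The requester can never approve their own payment, so a single
        # compromised mailbox or convincing deepfake cannot release funds alone.
        if employee == self.requester:
            raise PermissionError("requester cannot self-approve")
        self.approvers.add(employee)

    def is_releasable(self) -> bool:
        # Small payments need one approver; large ones need two distinct
        # people, ideally confirmed over separate channels (e.g., a phone
        # call to a known number, not a reply to the requesting email).
        required = 2 if self.amount >= APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= required

# Example: a wire request that appears to come from the CEO still needs two humans.
req = PaymentRequest(requester="ceo@example.com", amount=48_500, payee="New Supplier Ltd")
req.approve("controller@example.com")
print(req.is_releasable())  # False: a second, independent approver is still required
req.approve("cfo@example.com")
print(req.is_releasable())  # True
```

The design point is separation of duties: the party asking for money (who may be a deepfake) and the people releasing it must be different, independently reachable humans.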
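For the SEC check, EDGAR's long-standing public company search can serve as a rough first screen: a company with no filing history at all deserves suspicion. This is a sketch, not a definitive registration check (Investor.gov and FINRA BrokerCheck are the consumer-facing tools); the company name below is a placeholder, the `<cik>` match is a heuristic for "EDGAR found a filer," and the SEC asks automated clients to identify themselves via the User-Agent header.

```python
import urllib.parse
import urllib.request

def edgar_company_hits(name: str) -> bool:
    """Heuristically check whether EDGAR's company search finds any filer matching `name`."""
    query = urllib.parse.urlencode({
        "action": "getcompany",
        "company": name,
        "output": "atom",  # request the machine-readable Atom feed instead of HTML
        "count": "10",
    })
    url = f"https://www.sec.gov/cgi-bin/browse-edgar?{query}"
    # The SEC asks automated clients to send a descriptive User-Agent with contact info.
    req = urllib.request.Request(url, headers={"User-Agent": "scam-screen-demo contact@example.com"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # Matching filers are listed with a CIK, the SEC's unique company identifier.
    return "<cik>" in body.lower()

# A slick "platform" with zero EDGAR presence is a red flag worth escalating.
print(edgar_company_hits("Totally Real Crypto Exchange"))
```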
For mid-market businesses, the challenge is especially urgent. Many organizations are adopting AI tools faster than governance, operational controls, and security oversight can evolve. At the same time, attackers are using the same AI technologies to scale phishing campaigns, automate fraud, and exploit human trust. Organizations that fail to address deepfake cybersecurity risk may face financial fraud, data leakage, compliance exposure, operational disruption, and reputational damage.
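One inexpensive control against the email impersonation described earlier is making sure your own domain publishes an enforcing DMARC policy, so that receiving mail servers reject messages forging your executives' addresses. Below is a minimal sketch using the third-party dnspython package (`pip install dnspython`); the domain is a placeholder, and the parsing only extracts the policy (`p=`) tag.

```python
import dns.resolver  # third-party: pip install dnspython

def dmarc_policy(domain: str):
    """Return the DMARC policy ('none', 'quarantine', or 'reject') for a domain, or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record at all: forged mail is easy to deliver
    for rdata in answers:
        record = b"".join(rdata.strings).decode("utf-8", errors="replace")
        if record.lower().startswith("v=dmarc1"):
            # A record looks like: "v=DMARC1; p=reject; rua=mailto:..."
            for part in record.split(";"):
                key, _, value = part.strip().partition("=")
                if key.strip().lower() == "p":
                    return value.strip().lower()
    return None

policy = dmarc_policy("example.com")
if policy in (None, "none"):
    print("Spoofable: publish p=quarantine or, better, p=reject")
else:
    print(f"DMARC enforcement in place: p={policy}")
```

A policy of p=none only monitors; quarantine or reject is what actually stops a forged "From: ceo@yourcompany.com" message at the recipient's mail server.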
What Does the Broader Cybercrime Landscape Look Like Beyond AI?
While AI-powered fraud is the fastest-growing threat, it's not the only one. Cyber-enabled fraud, where criminals use the internet or technology to commit fraudulent acts, accounted for 85% of all losses in 2025. The top five crime types by financial damage were investment fraud, tech support scams, personal data breaches, confidence and romance scams, and government impersonation.
The average victim lost $20,699, which represents the kind of financial hit that can set a family back for years. These numbers often get reported as corporate or government problems, but in reality, the vast majority of complaints come from individuals: people checking their bank account, responding to a text, or clicking a link in what looked like a legitimate email. Every household with an internet connection is a potential target.
The good news is that most cybercrime is preventable. The threats are real, but so are the defenses. If you or someone you know is a victim of cybercrime, reporting it at ic3.gov is critical. Speed matters because the FBI's Recovery Asset Team can sometimes freeze stolen funds if they receive a report quickly. The convergence of AI and cybercrime represents one of the most significant cybersecurity challenges facing modern digital infrastructure, but awareness and verification protocols can meaningfully reduce risk.