Financial Groups Unveil Four-Part Plan to Stop AI-Powered Identity Fraud Before It Spreads
A coalition of major financial industry groups has released a comprehensive roadmap to combat AI-powered identity fraud, proposing federal policy changes and technology upgrades to address a threat that has grown exponentially in just three years. The joint paper from the American Bankers Association, the Better Identity Coalition, and the Financial Services Sector Coordinating Council outlines 20 recommendations organized into four strategic initiatives, targeting what has become one of the fastest-growing attack vectors in financial services.
How Serious Is the AI Identity Fraud Problem Right Now?
The scale of the threat has caught the attention of policymakers and security leaders alike. Deepfake incidents in the fintech sector increased 700% in 2023 compared to 2022, according to the report. Deloitte's Center for Financial Services projects that AI-enabled fraud losses in the United States could reach $40 billion by 2027, up from $12.3 billion in 2023, representing a compound annual growth rate of 32%. In 2021, 42% of Suspicious Activity Reports filed under the Bank Secrecy Act were tied to identity or authentication compromise, a baseline that has only worsened as generative AI tools have become more accessible and affordable.
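As a rough check on the arithmetic, those endpoints correspond to the standard compound annual growth rate formula applied over the four years from 2023 to 2027; the slightly lower 32% figure presumably reflects Deloitte's underlying model rather than these rounded endpoints:

\[
\mathrm{CAGR} = \left(\frac{V_{2027}}{V_{2023}}\right)^{1/4} - 1 = \left(\frac{40}{12.3}\right)^{1/4} - 1 \approx 0.34
\]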
The problem extends across ten distinct attack categories currently targeting financial institutions, including deepfakes used against identity verification systems, AI-generated phishing campaigns, synthetic identity creation, real-time deepfake fraud, and the use of AI agents for account takeovers. What makes this particularly urgent is the economics of the threat. Generative AI has compressed the effort required to craft convincing phishing campaigns from 16 hours of skilled work to just five prompts, according to IBM security researchers cited in industry analysis. Sixty percent of people have fallen victim to AI-automated phishing, according to research cited in the financial groups' report.
Why Is AI-Powered Phishing Scaling So Rapidly?
The acceleration of phishing attacks comes down to automation and cost reduction. Large language models (LLMs), AI systems trained on vast amounts of text to generate human-like responses, can automate the entire phishing process. That automation cuts the cost of phishing attacks by more than 95% while producing success rates equal to or better than those of manually crafted campaigns. The problem is compounded by legacy authentication vulnerabilities: passwords, SMS-based one-time passcodes, and push-based authenticator apps are all phishable, and AI tools allow adversaries to exploit those weaknesses at a scale and speed that was previously uneconomical.
The broader context reveals an arms race that currently favors attackers. According to research cited in vendor announcements, 88% of organizations report falling victim to AI-powered security incidents within the past 12 months. KnowBe4's 2025 Phishing Threat Trends Report found that more than 82% of phishing emails analyzed contained indicators of AI assistance. A Hoxhunt analysis documented a 14-fold surge in AI-generated phishing over the 2025 holiday period alone.
What Four Initiatives Are Financial Groups Proposing?
The recommendations are organized into four strategic initiatives designed to be achievable within two to three years. Each addresses a different layer of the identity and authentication infrastructure:
- Identity Proofing and Verification: A Treasury Department-led task force would coordinate federal, state, and local agencies on closing the gap between physical credentials and their digital equivalents. Mobile driver's licenses built on public key cryptography are identified as one viable path, since a deepfake cannot spoof possession of a private cryptographic key (see the sketch after this list). Expanding the Social Security Administration's electronic Consent-Based Social Security Number Verification (eCBSV) service to account opening, background checks, and other identity validation use cases would give financial institutions a way to verify identities against an authoritative government source.
- Authentication Modernization: Regulators would be encouraged to push financial institutions toward phishing-resistant authentication, specifically FIDO security keys and passkeys, for both internal systems and customer-facing applications. Policymakers would also be asked to avoid creating restrictions that limit the use of data analytics for risk-based fraud detection.
- International Coordination: NIST, DHS, and Treasury would engage with counterparts in the European Union and other countries on digital wallet interoperability and standards. China and other adversaries are active in international standards bodies that cover digital identity and authentication, and U.S. participation in those bodies is constrained by budget and staffing limitations.
- Public Education: Treasury would run campaigns with CISA and financial institutions on deepfake threats, and a separate public awareness effort would focus on passkeys and other phishing-resistant tools.
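The security case for both mobile driver's licenses and FIDO passkeys rests on proving possession of a private key rather than presenting something an attacker can copy or synthesize. The sketch below illustrates that general pattern in Python using the cryptography library: a relying party issues a fresh challenge and verifies the holder's signature against a previously registered public key. It is a simplified illustration of the underlying principle, not the actual mDL or FIDO/WebAuthn protocol; the key type, identifiers, and message layout are assumptions made for the example.

```python
# Minimal sketch of key-possession proof, the property that makes mobile
# driver's licenses and passkeys resistant to deepfakes and phishing: the
# verifier checks a signature over a fresh challenge, so a convincing face,
# voice, or password is not enough -- the holder must control the private key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the credential holder generates a key pair; only the public key
# is registered with the relying party (bank, agency, etc.).
holder_key = Ed25519PrivateKey.generate()
registered_public_key = holder_key.public_key()

# Verification: the relying party issues a fresh random challenge, bound to its
# own identifier so a response cannot be replayed to a different site.
challenge = os.urandom(32)
relying_party_id = b"bank.example"  # illustrative identifier
message = relying_party_id + challenge

# The holder's device signs the challenge with the private key it alone controls.
signature = holder_key.sign(message)

# The relying party verifies the signature against the registered public key.
try:
    registered_public_key.verify(signature, message)
    print("Key possession proven; authentication accepted")
except InvalidSignature:
    print("Signature invalid; authentication rejected")
```

Because the signed message binds the challenge to the relying party's identifier, a response captured by a phishing site cannot be replayed elsewhere, and a deepfaked face or voice gives an attacker nothing to sign with.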
Jeremy Grant, coordinator of the Better Identity Coalition, noted that passkey adoption is stronger than it may appear given how recently the technology arrived at scale. "We didn't really see passkeys start to emerge at scale in the consumer space until late 2023, and the fact that most consumers now know what they are not even three years later is notable, given how long it takes most new technology to find its way to consumers," Grant explained. He also identified a persistent misconception that complicates adoption efforts. Some people believe going passwordless makes them less secure, a view shaped by decades of guidance telling people to create strong, unique passwords. "That has not been an effective cybersecurity tool for a long time now, but that doesn't mean your average consumer understands this," Grant stated.
What's the Regulatory Gap Holding Back Progress?
Financial institutions currently operate under Bank Secrecy Act requirements for customer identity verification and under Federal Financial Institutions Examination Council guidance for authentication. Both are areas where, the groups argue, regulators need to issue updated guidance that gives institutions confidence that adopting newer credential and authentication technologies will satisfy their existing compliance obligations.
Grant emphasized that the threat extends well beyond financial services. "Deepfakes are not a sector-specific problem but a national problem," he said. "It's the same organized criminals and hostile nation-states exploiting the same core deficiencies in identity and authentication infrastructure to steal from banks, fintechs, health, retailers, cryptocurrency players, and government." He identified four of the 20 recommendations as having the broadest cross-sector impact: the state infrastructure grant program tied to NIST guidance, expanding eCBSV access, accelerating NIST's liveness detection guidance, and creating a multi-agency task force to monitor AI-driven identity threats.
The financial groups also noted interest in HR 7270, the Stop Identity Fraud and Identity Theft Act of 2026, which would have Treasury run a grant program covering both financial sector security and fraud in government benefits distribution. Meanwhile, vendors are racing to develop defensive technologies. IRONSCALES, an email security company, is demonstrating AI agents designed to perform continuous reconnaissance against an organization's public footprint, generate tailored attack simulations, and deliver forensic investigation of suspicious emails at the speed of a Level 2 analyst's assessment. The company is also extending deepfake protection for Microsoft Teams with enhanced voice detection that learns employee voice patterns passively from normal meeting participation.
The stakes are clear. Deepfake-driven fraud increased more than 700% year over year, according to Cyble's 2025 Executive Threat Monitoring data, and Gartner surveys indicate that 62% of organizations experienced a deepfake attempt in the past year. The financial industry's four-part plan represents an acknowledgment that no single technology or policy change will solve the problem. Instead, success requires coordinated action across identity infrastructure, authentication standards, international cooperation, and public awareness.