Why States Are Racing to Write AI Rules Before Banks Do It Themselves
States are stepping in to create AI governance frameworks for financial services before the industry sets its own standards. Colorado's renewed effort to establish pragmatic artificial intelligence oversight in banking and fintech signals a shift in how regulators approach rapid technology deployment. Rather than waiting for federal rules, states are working with industry groups to build workable standards that protect consumers without stifling innovation.
What's Driving the Push for State-Level AI Rules?
Financial institutions are already deploying advanced AI tools across core functions like fraud detection, underwriting, compliance, and customer service. According to recent data, 44% of organizations were already using generative AI in more than five use cases as of 2025, up from just 7% a year earlier, with 65% planning to increase investment. This rapid adoption has outpaced regulatory clarity, leaving states to decide whether to act proactively or reactively.
The American Fintech Council (AFC), which represents over 150 member companies and innovative banks, submitted a letter to Colorado lawmakers expressing support for the state's governance work. The council emphasized that financial institutions are already deploying these technologies responsibly, but clear rules would provide consistency and durability across state lines.
"A thoughtful, risk-based approach to AI governance is essential to ensuring that financial institutions can continue to responsibly innovate while maintaining strong consumer protections and operational accountability," said Phil Goldfeder, CEO of the American Fintech Council.
How Are Financial Institutions Actually Using AI Today?
The real-world applications of AI in finance are far broader than many people realize. Banks and fintech companies are using machine learning and language models to process financial data, make faster decisions, reduce manual work, and improve accuracy across multiple workflows. The strongest use cases tend to involve high-volume decisions, repeatable reviews, and measurable business outcomes.
Key areas where AI is delivering measurable value include:
- Fraud Detection: AI reviews card, ACH, account, and payment behavior to spot unusual patterns earlier and reduce false positives by scoring transactions with device, merchant, velocity, account, and behavioral signals.
- Credit and Underwriting: AI helps assess borrower risk, flag weak files, and surface inconsistencies in applications. In one reported agricultural lending case, an AI-supported platform cut the time to credit access from 10 working days to 24 hours while scaling to serve 10,000 or more farmers.
- Anti-Money Laundering (AML) Compliance: AI helps compliance teams rank suspicious activity alerts, cluster related cases, and extract evidence from records. One reported finance deployment delivered results three times faster at a 70% cost reduction.
- Customer Service: AI-powered service copilots help agents work faster, cutting handovers by 35% and raising chat satisfaction scores by nine percentage points. One deployment handled 50,000 or more multilingual customer queries.
- Back-Office Finance: AI supports reconciliations, exception handling, document review, and evidence extraction, reducing repetitive work and improving reporting accuracy.
These applications show that AI in finance is not speculative or distant. It is already embedded in how banks and fintech companies operate, which is why regulators are moving quickly to establish clear governance.
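The fraud-detection pattern described above, scoring transactions against device, merchant, velocity, and behavioral signals, can be sketched in a few lines. The signal names, weights, and thresholds below are illustrative assumptions for the sake of the sketch, not a production model:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float                 # transaction amount in dollars
    new_device: bool              # first time this device was seen on the account
    merchant_risk: float          # 0.0 (trusted) .. 1.0 (high-risk) merchant rating
    txns_last_hour: int           # velocity: transactions in the past hour
    deviates_from_history: bool   # behavioral: unusual for this account

def risk_score(t: Transaction) -> float:
    """Combine device, merchant, velocity, and behavioral signals into a
    0..1 score. The weights are illustrative, not calibrated."""
    score = 0.0
    score += 0.25 if t.new_device else 0.0
    score += 0.30 * t.merchant_risk
    score += 0.20 * min(t.txns_last_hour / 10, 1.0)  # cap velocity contribution
    score += 0.25 if t.deviates_from_history else 0.0
    return round(score, 2)

def triage(t: Transaction) -> str:
    """Map the score to an action; in practice thresholds are tuned to
    the institution's false-positive tolerance."""
    s = risk_score(t)
    if s >= 0.7:
        return "block"
    if s >= 0.4:
        return "review"
    return "approve"

routine = Transaction(42.0, False, 0.1, 1, False)
suspicious = Transaction(950.0, True, 0.8, 9, True)
print(triage(routine))     # approve
print(triage(suspicious))  # block
```

Combining several independent signals, rather than relying on any one of them, is what lets systems like this flag unusual patterns earlier while keeping false positives down.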
What Three Principles Should Guide AI Governance in Finance?
The AFC outlined three core principles for effective AI governance in financial services. These principles are designed to align with existing risk management and compliance structures already in place at regulated institutions, rather than adding unnecessary complexity.
- Data Privacy and Security: Financial institutions must protect customer data and ensure that AI systems do not expose sensitive information. This principle builds on existing data protection requirements already mandated by federal and state law.
- Transparency in AI Operations: Financial institutions should be able to explain how AI systems make decisions, particularly in high-stakes areas like lending and fraud detection. This does not mean revealing proprietary algorithms, but rather demonstrating that decisions follow clear logic and can be audited.
- Meaningful Human Oversight: AI should enhance human decision-making, not replace it. Difficult cases, sensitive actions, and exceptions should be escalated to human analysts for review before final decisions are made.
According to the AFC, these principles align with existing supervisory frameworks and can be integrated into current compliance structures without adding unnecessary burden.
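The transparency and human-oversight principles translate naturally into a decision pattern: the system records auditable reason codes for every outcome and auto-decides only when the model is clearly confident, routing gray-zone cases to analysts. A minimal sketch, with hypothetical score bands:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # "approve", "deny", or "escalate"
    reasons: list         # auditable reason codes, not raw model internals
    needs_human: bool = False

def decide_loan(model_score: float, reasons: list,
                auto_band: tuple = (0.25, 0.85)) -> Decision:
    """Auto-decide only when the model is clearly confident; anything in
    the gray zone is escalated to a human analyst before a final call.
    The band edges are illustrative policy parameters."""
    low, high = auto_band
    if model_score >= high:
        return Decision("approve", reasons)
    if model_score <= low:
        return Decision("deny", reasons)
    return Decision("escalate", reasons + ["score in manual-review band"],
                    needs_human=True)
```

An analyst reviewing an escalated case sees the same reason codes the system recorded, which is what makes the decision auditable without exposing proprietary algorithms.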
How Should Financial Institutions Implement AI Responsibly?
The most successful AI implementations in finance follow a structured approach that prioritizes transparency and control. Rather than treating AI as a black box, finance teams need clear approval processes, audit trails, exception handling, and rollback paths.
- Start Small and Measurable: Teams usually get the best results when they start with one workflow, one queue, and one clear key performance indicator. This allows institutions to test, learn, and scale responsibly.
- Build Hybrid Decision Systems: Combine rules with machine learning to handle clear cases quickly while escalating unclear ones to human analysts. This approach balances speed with accuracy.
- Monitor Continuously: Keep an eye on model drift, false positives, and the stability of outputs over time. Regular monitoring ensures that AI systems remain reliable and fair as they encounter new data patterns.
- Maintain Clear Governance: Establish approval processes, audit trails, and exception handling procedures. Finance teams need to know why an AI system made a particular decision and be able to override it if necessary.
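The audit-trail and override requirements in the list above can be sketched as an append-only decision log. The class and field names here are hypothetical; a real system would persist entries to tamper-evident storage:

```python
import datetime

class AuditedDecisionLog:
    """Minimal audit trail: every automated decision and every human
    override is recorded with an actor, a reason, and a timestamp."""
    def __init__(self):
        self.entries = []

    def record(self, case_id, decision, reason, actor="model"):
        self.entries.append({
            "case_id": case_id,
            "decision": decision,
            "reason": reason,
            "actor": actor,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def override(self, case_id, new_decision, reason, analyst):
        # Overrides are appended, never edited in place, so the full
        # history of a case stays reviewable.
        self.record(case_id, new_decision, reason, actor=analyst)

    def history(self, case_id):
        return [e for e in self.entries if e["case_id"] == case_id]

log = AuditedDecisionLog()
log.record("loan-001", "deny", "score below threshold")
log.override("loan-001", "approve", "verified income documents",
             analyst="analyst-7")
```

Because overrides append a new entry rather than rewriting the old one, the log answers both questions regulators care about: why the system made a decision, and who changed it and why.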
This structured approach is why the AFC and other industry groups support state-level governance frameworks. Clear rules provide consistency and allow institutions to invest confidently in AI while maintaining consumer trust.
Why Does Colorado's Approach Matter Beyond the State?
Colorado's effort to establish pragmatic AI governance is significant because it could serve as a model for other states. Rather than imposing rigid or overly prescriptive requirements that could limit innovation, the framework emphasizes risk-based oversight that reflects how these technologies actually function in real-world financial services environments.
"A balanced AI framework should build on the strong risk management, compliance, and consumer protection systems that responsible financial institutions already maintain," noted Ashley Urisman, Director of State Government Affairs at the American Fintech Council.
The stakes are high. If states do not establish clear, consistent standards, financial institutions may face a patchwork of conflicting regulations across different jurisdictions. Conversely, if regulations are too rigid, they could slow innovation and push fintech companies to relocate or reduce investment. Colorado's approach attempts to balance these competing interests by building on existing compliance structures and emphasizing practical oversight rather than prescriptive rules.
As AI adoption in finance accelerates, the regulatory landscape will likely become more complex. States like Colorado are positioning themselves as leaders in establishing workable governance frameworks that protect consumers while allowing responsible innovation to continue delivering better financial outcomes.