43 States Are Building AI Governance Frameworks, But There's a Catch
As state agencies experiment with generative AI tools, a growing number are formalizing governance frameworks to manage the technology's risks and opportunities, though implementation varies dramatically across the country. A report published by the UC Berkeley School of Information found that 43 states have established some form of AI governance, but with significant differences in how they structure oversight, define accountability, and ensure transparency.
Why Are States Moving Toward Formal AI Governance?
State governments are increasingly recognizing that AI governance is becoming a core operational function rather than a standalone compliance exercise. Eric Hysen, former chief information officer of the Department of Homeland Security, argues in the report that this shift reflects the reality of how AI is being deployed across state agencies. As agencies continue experimenting with tools like ChatGPT and other large language models (LLMs), the need for structured oversight has become urgent. The risks are real: without proper safeguards, AI systems can introduce bias, create cybersecurity vulnerabilities, and make consequential decisions about citizens' access to government services.
The report emphasizes that when governments fail to manage these risks effectively, the impacts on constituents' lives can be severe. This includes challenges inherent to managing any information technology system, such as cybersecurity and system reliability, but also unique risks specific to AI, including algorithmic bias and unintended decision-making failures.
What Does a Functional AI Governance Framework Look Like?
The UC Berkeley report includes a playbook for governments deploying AI systems, offering best practices that several states are already adopting. Rather than treating AI governance as a separate compliance process, the playbook recommends integrating it directly into existing procurement, cybersecurity, and operational workflows. This approach helps ensure that AI oversight becomes embedded in how agencies make decisions about technology adoption, rather than being bolted on after the fact.
The playbook also calls for agencies to establish clear accountability structures and continuously monitor AI systems after deployment. This is critical because AI systems can degrade over time, produce unexpected outputs when deployed in new contexts, or amplify existing biases in data. Continuous monitoring helps catch these problems before they cause harm.
Steps to Implement Effective AI Governance in Government
- Create Governance Councils: Establish dedicated teams responsible for overseeing AI adoption, with clear roles for IT, legal, policy, and operational staff who understand both the technology and its potential impacts.
- Establish Human Oversight Rules: Define clear boundaries between decisions that can be automated and those requiring human review, ensuring that consequential decisions about citizens remain subject to meaningful human oversight.
- Conduct Risk Assessments: Regularly scan for failures in AI design and implementation that could cause harm or bias, including vulnerabilities to cyberattacks targeting AI systems themselves.
- Protect Data Privacy: Implement guardrails preventing employees from entering sensitive or personally identifiable information into AI systems, and establish clear data governance policies.
- Monitor Continuously: Rather than treating deployment as the end of oversight, establish ongoing monitoring systems to detect performance degradation, bias drift, or unintended consequences over time.
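The data-privacy guardrail in the steps above is often implemented as a simple pre-submission filter that screens prompts before they leave the agency. The sketch below is a minimal, hypothetical illustration of that idea, not an implementation from the report: the `redact_pii` function and the short pattern list are assumptions, and a production system would rely on a vetted PII-detection library and agency-specific rules.

```python
import re

# Hypothetical PII patterns for illustration only; a real deployment
# would use a maintained detection library, not this short list.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders; return cleaned text
    and the list of PII types found (useful for audit logging)."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

# Example: an employee pastes case notes containing PII.
clean, hits = redact_pii("Applicant 123-45-6789 emailed jane.doe@example.gov")
# 'clean' is safe to forward to the LLM; 'hits' feeds the audit log.
```

A filter like this sits naturally at the procurement-approved gateway between agency systems and any external LLM, which is also where continuous-monitoring hooks (logging, drift checks) can be attached.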
Which States Are Leading the Way?
Several states have launched pilot programs and governance structures that align with the report's recommendations. Pennsylvania launched one of the nation's first statewide generative AI pilots after Governor Josh Shapiro's 2023 executive order directed agencies to responsibly explore the technology's use cases. The state's pilot project included guardrails prohibiting employees from entering sensitive or personally identifiable information into AI systems, a practical safeguard that prevents accidental exposure of citizen data.
California recently launched a statewide rollout of Engaged California, a public participation platform designed to gather resident feedback on how AI is affecting workers, government services, and the broader economy. This approach recognizes that AI governance should not be a top-down exercise; it should incorporate public input about how the technology is actually impacting communities.
Colorado and North Carolina have focused heavily on security and data governance as agencies test AI applications before deploying tools at scale. This cautious approach reflects an understanding that rushing to deployment without understanding security implications can create serious vulnerabilities.
What's the Challenge With Current State-Level Approaches?
While these early frameworks are promising, the report reveals a critical problem: wide variations in scope, structure, and transparency across the 43 states with governance frameworks. This fragmentation means that a citizen in Pennsylvania may have different protections from someone in California or Colorado when interacting with AI-powered government services. Without federal standards, states are essentially experimenting independently, which can lead to inconsistent protection of fundamental rights and unequal access to government services.
The report also stresses that AI's capabilities and risk landscape are constantly evolving. Governance frameworks must address not only extreme misuse scenarios but also the subtle, cumulative ways in which AI systems can shape human decision-making, wellbeing, and trust. This means that static policies written today may become obsolete as the technology advances, requiring agencies to build adaptive governance systems that can evolve alongside the technology itself.
The emergence of state-level AI governance frameworks represents an important step toward responsible AI deployment in government. However, the wide variations in approach underscore the need for greater coordination, transparency, and potentially federal guidance to ensure that all citizens receive consistent protections regardless of which state they live in.