How Italy's Public Sector Is Building AI Governance Before the EU AI Act Takes Full Effect
Italy's National Institute for Insurance against Accidents at Work (INAIL) is pioneering a structured approach to AI governance that could serve as a model for European public administrations struggling to comply with the EU AI Act (Regulation 2024/1689) and emerging national regulations. Rather than waiting for regulatory deadlines, INAIL has proactively developed a cross-functional governance framework designed to manage AI risks while maximizing the technology's benefits for internal operations and citizen services.
Why Are European Public Administrations Rushing to Build AI Governance Frameworks?
Public administrations across Europe face a complex challenge: they must deliver efficient, transparent, and inclusive services while navigating an evolving regulatory landscape that includes the EU AI Act and new national legislation like Italy's Artificial Intelligence Act (Law 132/2025). INAIL's decision to establish formal AI governance was driven by multiple factors, including the need to adapt to these regulatory changes and findings from a 2023 maturity assessment that revealed gaps in how the institute managed its initial AI use cases.
The maturity assessment, conducted in collaboration with CINI (the National Interuniversity Consortium for Informatics) and Accredia, Italy's national accreditation body, evaluated INAIL's AI initiatives against ISO/IEC 42001:2023, the AI management system standard. The results highlighted the need for clearer coordination mechanisms, defined responsibilities, and shared practices to manage AI risks and opportunities across the organization.
"The Institute's AI governance framework is guided by the strategic objective of maximizing the value generated by artificial intelligence while effectively managing the associated risks, in line with public-sector values and institutional responsibilities," stated Francesco Saverio Colasuonno, Head of the Data and Analytics Office at INAIL's Central Directorate for Digital Organisation.
How Is INAIL Structuring Its AI Governance?
INAIL's framework is organized around five dedicated working groups, each addressing a critical dimension of AI governance. This cross-functional structure brings together relevant organizational units to ensure coherent oversight and systematic risk management across the institute.
- Training and Capacity Building: Led by the HR Directorate's Office, this group focuses on developing staff skills and organizational awareness around AI adoption and responsible use.
- Communication Team: Responsible for managing internal and external AI-related information, ensuring transparency and stakeholder engagement throughout the governance process.
- Processes Team: Focused on core business operations, risk management, data quality assessment, and cost-benefit analysis of AI-enabled initiatives.
- Policy Development Team: Supports the institute in establishing foundational governance documents, including an AI Code of Ethics and Conduct, comprehensive AI Policy, AI Manifesto, and institutional AI strategy drivers.
- Technical Working Group: Provides governance tools for cataloguing, classifying, and assessing the risk levels of AI-enabled systems across the organization.
This structure reflects a strategic shift from fragmented, incremental AI experimentation toward a systematic approach that ensures compliance with evolving regulations while aligning with core public-sector principles. The framework operates along two parallel paths: bringing current AI use into line with regulatory requirements and systematized governance practices, and, at the same time, building the organizational awareness, ethical safeguards, and skills needed for sustainable, long-term AI adoption.
What Real-World AI Solutions Is INAIL Deploying?
INAIL is developing several innovative AI use cases aimed at improving operational efficiency and service delivery to citizens. Two projects, Archimede and Linguistic Mediator, were recently submitted to the Best Cases Award 2025 and received positive evaluations for their potential to drive meaningful improvements in public service delivery.
Archimede is a custom AI-powered solution built on Microsoft's Azure cloud AI services, including speech recognition, translation, and generative AI capabilities. The system was designed specifically to strengthen INAIL's internal governance, auditing, and compliance by detecting and anticipating errors, evasions, and fraud. Built on a scalable machine learning architecture, Archimede can process growing volumes of data and be extended to additional institutional workflows, ensuring long-term sustainability and flexibility. Although Archimede was developed for INAIL's own needs, the institute is exploring the possibility of promoting its reuse by other public administrations, potentially extending its benefits beyond the organization.
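INAIL has not published Archimede's internal architecture, but the sketch below gives a rough sense of how such a pipeline could be assembled from the Azure services named above: the Azure Speech SDK for transcription, the Azure Translator REST API, and an Azure OpenAI chat deployment that flags possible irregularities for human review. The environment variables, the deployment name gpt-4o-audit, and the prompt are hypothetical placeholders, not details of the actual system.

```python
# Illustrative sketch only: keys, resource names, the deployment name, and the prompt
# are placeholders, not details of INAIL's actual Archimede implementation.
import os

import requests
import azure.cognitiveservices.speech as speechsdk
from openai import AzureOpenAI

SPEECH_KEY = os.environ["AZURE_SPEECH_KEY"]        # hypothetical environment variables
SPEECH_REGION = os.environ["AZURE_SPEECH_REGION"]
TRANSLATOR_KEY = os.environ["AZURE_TRANSLATOR_KEY"]
AOAI_ENDPOINT = os.environ["AZURE_OPENAI_ENDPOINT"]
AOAI_KEY = os.environ["AZURE_OPENAI_KEY"]


def transcribe(audio_path: str) -> str:
    """Speech-to-text via the Azure Speech SDK (single-utterance recognition)."""
    speech_config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)
    speech_config.speech_recognition_language = "it-IT"
    audio_config = speechsdk.audio.AudioConfig(filename=audio_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
    return recognizer.recognize_once().text


def translate_to_english(text: str) -> str:
    """Italian-to-English translation via the Azure Translator REST API (v3.0)."""
    response = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "from": "it", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": SPEECH_REGION,  # region of the Translator resource (placeholder)
        },
        json=[{"text": text}],
        timeout=30,
    )
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]


def flag_anomalies(case_text: str) -> str:
    """Ask an Azure OpenAI chat deployment to list possible irregularities for human review."""
    client = AzureOpenAI(api_key=AOAI_KEY, api_version="2024-02-01", azure_endpoint=AOAI_ENDPOINT)
    completion = client.chat.completions.create(
        model="gpt-4o-audit",  # hypothetical deployment name
        messages=[
            {"role": "system", "content": "You support internal audit. List possible irregularities "
                                          "in the case below; do not draw legal conclusions."},
            {"role": "user", "content": case_text},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    transcript = transcribe("interview.wav")
    report = flag_anomalies(translate_to_english(transcript))
    print(report)  # output goes to a human auditor, not to an automated decision
```

Keeping the generative step advisory, with its output routed to a human reviewer rather than an automated decision, is consistent with the human-oversight expectations discussed later in this article.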
How Can Public Administrations Implement AI Governance Effectively?
INAIL's approach offers practical steps that other European public bodies can adapt to their own contexts. The institute's framework demonstrates that effective AI governance requires both immediate compliance measures and long-term strategic planning.
- Conduct a Maturity Assessment: Evaluate existing AI use cases against established standards like ISO/IEC 42001:2023 to identify governance gaps and areas requiring improvement.
- Establish Cross-Functional Working Groups: Create dedicated teams covering training, communication, processes, policy development, and technical governance to ensure coordinated oversight across the organization.
- Develop Foundational Governance Documents: Create an AI Code of Ethics and Conduct, comprehensive AI Policy, and institutional AI strategy that align with national guidelines and European regulations.
- Integrate with Existing Data Governance: Build AI governance frameworks on top of existing data governance structures to ensure data quality, integrity, and continuity with broader digital transformation initiatives.
- Implement Cataloguing and Risk Assessment Tools: Deploy technical solutions for systematically cataloguing, classifying, and assessing the risk levels of AI-enabled systems across the organization (a minimal sketch of such a catalogue follows this list).
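As a purely illustrative starting point, the sketch below models a small AI-system catalogue in Python, with risk tiers loosely mirroring the EU AI Act's classification logic. The field names, the example record, and its assigned risk level are assumptions made for illustration, not INAIL's internal schema or tooling.

```python
# Minimal sketch of an AI-system catalogue with EU AI Act-style risk tiers.
# Field names and the example entry are illustrative, not INAIL's internal schema.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Risk tiers broadly following the EU AI Act's classification logic."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk (Annex III use case or safety component)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"


@dataclass
class AISystemRecord:
    name: str
    owner_unit: str                      # organizational unit accountable for the system
    purpose: str
    risk_level: RiskLevel
    data_sources: list[str] = field(default_factory=list)
    human_oversight: str = ""            # how outputs are reviewed before use
    last_assessment: str = ""            # date of the most recent risk assessment


# Hypothetical example entry, not an actual classification of any INAIL system.
catalogue: list[AISystemRecord] = [
    AISystemRecord(
        name="Claims-triage assistant (hypothetical)",
        owner_unit="Digital Organisation Directorate (placeholder)",
        purpose="Suggest a priority order for incoming claims",
        risk_level=RiskLevel.HIGH,
        data_sources=["claims database", "case notes"],
        human_oversight="Suggestions reviewed by case officers before any action",
        last_assessment="2025-01-15",
    )
]

# Simple query used in a periodic governance review: list systems that need
# reinforced controls under the catalogue's own classification.
for record in catalogue:
    if record.risk_level in (RiskLevel.HIGH, RiskLevel.UNACCEPTABLE):
        print(f"{record.name}: {record.risk_level.value} (owner: {record.owner_unit})")
```

Even a simple registry like this makes periodic governance reviews concrete: high-risk entries can be listed automatically and checked against their documented oversight arrangements and assessment dates.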
What Legal Risks Do AI Systems Create for Public Administrations?
Beyond operational governance, public administrations must also understand the legal liability landscape surrounding AI errors and hallucinations. In the European Union, there is no separate legal category for "liability for hallucinations," meaning that AI errors are addressed through existing liability frameworks such as contractual liability, tort liability, or product liability.
When AI systems generate false information, determining legal responsibility requires analyzing who controlled the risk at each stage of the chain, which specific duty was breached, and whether a sufficiently strong causal link exists between the false outcome and the harm. For high-risk AI systems, the EU AI Act imposes strict obligations on developers to prevent or mitigate hallucinations, including requirements for adequate accuracy, robustness, cybersecurity, and control mechanisms throughout the system's lifecycle.
"The provider of the AI system has the utmost interest in ensuring that the system functions correctly," explained Joaquín Muñoz, partner responsible for privacy and data protection at Bird and Bird.
However, for general-purpose AI models, obligations focus primarily on transparency and technical documentation rather than strict hallucination prevention. This distinction is critical for public administrations: while developers of high-risk systems face regulatory requirements to mitigate errors, organizations using general-purpose AI tools bear significant responsibility for human oversight and verification of AI-generated outputs.
INAIL's governance framework addresses these legal risks by integrating data governance with AI governance, ensuring that the quality and integrity of data underpinning AI systems are maintained. By establishing clear policies, oversight mechanisms, and accountability structures, the institute is positioning itself to manage both the operational benefits and legal liabilities associated with AI adoption in the public sector.