EU AI Act Faces Pressure to Simplify as Healthcare Regulators Demand Clearer Rules
The EU's landmark AI Act is entering a critical phase where healthcare regulators and industry groups are demanding significant clarifications to make the rules workable in practice. As the European Commission's Digital Omnibus reforms move into final negotiations, MedTech Europe has submitted formal feedback calling for clearer integration between the AI Act and other sectoral legislation, along with extended implementation timelines to give companies more time to comply.
Why Is Healthcare Regulation Becoming the Test Case for EU AI Rules?
Healthcare represents one of the most complex sectors for AI regulation because medical devices must meet both AI governance requirements and existing medical device regulations simultaneously. The European Council and European Parliament have already adopted their positions on the Digital Omnibus reforms, and the process is now entering trilogue negotiations, where representatives from both bodies will hammer out the final text. This timing matters because healthcare companies need clarity before deploying AI-powered diagnostic tools, treatment recommendations, and patient monitoring systems across the EU.
The challenge is that the EU AI Act was designed as a horizontal framework covering all sectors, but healthcare has its own regulatory ecosystem. MedTech Europe's response to the European Commission's consultation on simplifying AI rules highlights a fundamental tension: companies must navigate overlapping requirements from the AI Act, the Medical Device Regulation (MDR), and the In Vitro Diagnostic Regulation (IVDR), often without clear guidance on how these frameworks interact.
What Specific Changes Are Healthcare Regulators Requesting?
Beyond MedTech Europe's input, data protection authorities across the EU have also weighed in. The European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) issued a joint opinion on the proposed European Biotech Act, emphasizing the need for clearer safeguards when health and genetic data are used in biotech and AI contexts. Their concerns center on three key areas:
- Harmonized Legal Bases: Different EU member states apply different legal justifications for processing clinical data, creating fragmentation that makes it difficult for companies to deploy AI systems across borders.
- Genetic Data Protections: The EDPB and EDPS want stronger, more explicit protections when genetic information is used to train or validate AI models, given the sensitive nature of this data.
- Clinical Data Governance: Clearer rules are needed for how clinical trial data, patient records, and real-world evidence can be used to develop and improve AI medical devices without violating privacy rights.
These requests signal that the EU's regulatory approach, while comprehensive, may be creating unintended friction for the very sectors where AI could deliver the most benefit to patients.
How Are Other Regions Approaching Healthcare AI Regulation?
The UK is taking a notably different approach. The Medicines and Healthcare products Regulatory Agency (MHRA) has secured multi-year funding to expand its AI Airlock Program, which supports the development of more ambitious AI medical devices by providing a structured pathway for innovation. This program allows companies to test and refine AI systems in a controlled environment before full regulatory review, reducing time to market while maintaining safety oversight.
Meanwhile, a new parliamentary inquiry in the UK is examining barriers to National Health Service (NHS) adoption of AI and personalized medicine technologies. The inquiry is focusing on procurement challenges, digital infrastructure limitations, and system fragmentation, suggesting that the UK sees regulatory clarity as only part of the solution; healthcare systems also need investment in the infrastructure to actually deploy these tools.
In the United States, the FDA has taken a firmer stance on AI oversight. On April 1, 2026, the FDA rejected a petition from Harrison.ai that sought to exempt certain AI-powered radiology devices from premarket review requirements. The FDA concluded that prior clearances do not demonstrate sufficient "proficiency in processes" to justify exempting future devices, noting that AI development varies significantly across different medical indications, imaging modalities, and device types. Instead, the FDA pointed manufacturers toward Predetermined Change Control Plans as a more appropriate pathway to reduce regulatory burden while maintaining oversight.
Steps to Navigate EU AI Compliance in Healthcare
For healthcare organizations and medtech companies operating in Europe, the current regulatory environment requires a proactive approach:
- Map Overlapping Requirements: Document how the EU AI Act, Medical Device Regulation, and In Vitro Diagnostic Regulation apply to your specific AI system, and identify gaps or conflicts early in development.
- Engage with Regulators Early: Participate in pre-submission meetings with national competent authorities to clarify expectations before investing heavily in compliance infrastructure.
- Plan for Extended Timelines: Given that MedTech Europe is calling for extended implementation timelines, budget additional time for compliance activities and regulatory review, rather than assuming the original deadlines will hold.
- Document Data Governance: Establish clear policies for how clinical, genetic, and patient data are collected, processed, and used in AI model development, with explicit attention to the legal basis for processing under GDPR and emerging biotech regulations.
- Monitor Trilogue Negotiations: Follow the ongoing Digital Omnibus trilogue discussions, as changes to the AI Act's final text could affect your compliance obligations.
What Does This Mean for the Broader AI Regulation Debate?
The healthcare sector's push for clearer EU AI rules reflects a broader tension in global AI governance. While there is near-universal agreement among policymakers that AI regulation is necessary, the practical implementation of those rules is proving more complex than anticipated. A recent survey of 301 public policy experts across the United States and Europe found that 92% of U.S. policy insiders and 70% of European policy insiders support stronger AI regulation. However, the same survey revealed significant differences in how urgently each region views the problem, with 41% of U.S. policy experts believing AI poses an existential threat to humanity, compared to 29% of European experts.
"What makes these findings so significant is who is saying it. These are the practitioners who work inside the policy process every day, spanning every corner of the policy world from defense to healthcare to finance, not activists or everyday citizens," said William Stewart, President and Founder of Povaddo, the firm that conducted the survey.
The EU's introduction of the AI Act, the world's first comprehensive AI regulatory framework, may explain why European policy experts express comparatively lower levels of concern across several survey questions. Having a regulatory framework in place appears to reduce anxiety among policymakers, even if that framework is still being refined and simplified.
The healthcare sector's experience suggests that the next phase of AI regulation will be less about broad principles and more about sectoral specificity. As the EU's Digital Omnibus trilogue negotiations proceed, the feedback from MedTech Europe, the EDPB, and the EDPS will likely shape how the final AI Act text addresses the intersection of AI governance and existing regulatory regimes. For healthcare companies, the message is clear: engage with regulators now, document your compliance approach, and prepare for a longer implementation timeline than originally anticipated.