The FDA Clearance Gap: Why Medical AI Devices Hide Their Foundation Model Architecture
Foundation models are quietly reshaping medical device approvals, but the FDA's regulatory language is masking the shift. As of May 2026, at least 14 FDA-cleared medical devices use or are built on foundation model architectures, yet only two devices explicitly mention "foundation model" in their official clearance summaries. This discrepancy reveals a regulatory strategy where manufacturers describe cutting-edge AI systems using familiar terminology to smooth the path to approval.
What Are Foundation Models in Medical Devices?
Foundation models represent a fundamental departure from how medical AI has traditionally worked. Historically, medical devices relied on narrow, single-task algorithms trained on highly curated datasets: a model trained to detect pneumothorax on chest CT could not detect a rib fracture without being retrained from scratch. Foundation models change this paradigm. These large-scale systems are pre-trained on vast amounts of unlabeled or loosely labeled data using self-supervised learning, a technique in which the model learns patterns without explicit human labels. Once trained, they can be fine-tuned for multiple downstream tasks with significantly less labeled data and shorter development cycles.
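The pre-train/fine-tune pattern described above can be sketched in a few lines. Everything here is a hypothetical toy stand-in, not any vendor's actual pipeline: the "encoder" is a trivial frozen feature extractor playing the role of a large pre-trained model, and fine-tuning adjusts only a small linear head on a handful of labeled examples.

```python
# Illustrative sketch of the pre-train / fine-tune pattern behind
# foundation models. All names, data, and features are hypothetical
# toy stand-ins, not any real device's architecture.

def pretrained_encoder(pixel_values):
    """Stands in for a large encoder pre-trained with self-supervised
    learning on unlabeled data. Its weights stay frozen downstream;
    here it just summarizes the input into two crude features."""
    mean = sum(pixel_values) / len(pixel_values)
    spread = max(pixel_values) - min(pixel_values)
    return [mean, spread]

def finetune_linear_head(examples, epochs=200, lr=0.05):
    """Fine-tunes only a small task head on labeled data, leaving the
    encoder untouched -- the reason a downstream task needs far fewer
    labels than training a narrow model from scratch."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for pixels, label in examples:
            features = pretrained_encoder(pixels)
            pred = sum(w * f for w, f in zip(weights, features)) + bias
            error = pred - label  # gradient of squared error
            for i, f in enumerate(features):
                weights[i] -= lr * error * f
            bias -= lr * error
    return weights, bias

# Toy labeled set: "abnormal" inputs (label 1) have high spread.
train = [([0.1, 0.9, 0.2], 1), ([0.4, 0.5, 0.5], 0),
         ([0.0, 1.0, 0.3], 1), ([0.45, 0.5, 0.55], 0)]
weights, bias = finetune_linear_head(train)

def predict(pixels):
    f = pretrained_encoder(pixels)
    return sum(w * x for w, x in zip(weights, f)) + bias
```

Retargeting the same frozen encoder to a second task would mean training another small head on that task's labels, rather than rebuilding the whole model, which is the economic core of the foundation model approach.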
Radiology has become the proving ground for this technology. The FDA has cleared over 1,500 AI-enabled medical devices, with radiology accounting for roughly 69% of the total. The transition from narrow AI to foundation models is now visible in clearance data, though often obscured by regulatory language.
Why Do Manufacturers Hide Foundation Model Architecture in FDA Filings?
The regulatory strategy is straightforward: manufacturers describe novel underlying architectures using established, familiar terminology to smooth the path to substantial equivalence, the FDA standard that allows faster approval for devices similar to existing cleared products. When Aidoc received FDA clearance for a rib fracture triage tool in December 2024, the company later announced it as the first clearance derived from its CARE1 Foundation Model, yet the official 510(k) summary simply described it as a "deep learning algorithm" trained on labeled images. This discrepancy highlights how regulatory language can lag behind technological reality.
The strategy works because the FDA's 510(k) process prioritizes a clear, bounded evaluation of safety and effectiveness rather than a sprawling technical treatise on underlying architecture. By presenting the device as a conventional deep learning system, manufacturers reduce regulatory friction and accelerate time to market.
How to Identify Foundation Models in FDA Clearance Data
- Tier 1 Evidence: Explicit "foundation model" language appears in official 510(k) decision summaries. Only two devices meet this threshold as of May 2026: Aidoc's BriefCase-Triage CARE Multi-triage CT Body (K252970, cleared January 7, 2026) and Aidoc's BriefCase-Triage CARE Multi-Triage CT (K253578, cleared February 26, 2026), both of which explicitly state the device uses a "foundation model-based artificial intelligence system".
- Tier 2 Evidence: Explicit transformer or self-supervised architecture appears in summaries without the phrase "foundation model." Four devices fall into this category, including Qure.ai's qXR-Detect (K251934), which explicitly notes replacement of a CNN encoder with a Vision Transformer-based encoder, and Apple's Hypertension Notification Feature (K250507), which describes use of self-supervised learning on PPG data.
- Tier 3 Evidence: Foundation model architecture is inferred from press releases, peer-reviewed research, or generalist capability descriptions. Eight devices fit this pattern, including RapidAI Enterprise Suite (K251151), where press releases explicitly state the platform is "underpinned by advanced foundation models trained on multimodal clinical data," even though the 510(k) summary uses generic language.
This three-tier classification system reveals the gap between what manufacturers announce publicly and what appears in regulatory filings. Aidoc's rib fracture clearance (K243548) from December 2024 is a prime example: the company announced it as powered by their CARE1 Foundation Model, but the 510(k) summary uses only generic "deep learning" language.
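The three evidence tiers above amount to a keyword screen applied first to the official 510(k) summary, then to public materials. A minimal sketch of that screen follows; the phrase lists are illustrative heuristics chosen for this example, not an official FDA rubric.

```python
# Minimal sketch of the three-tier evidence screen described above.
# The keyword lists are illustrative heuristics, not an official rubric.

TIER1_PHRASES = ["foundation model"]                 # explicit in the 510(k) summary
TIER2_PHRASES = ["transformer", "self-supervised"]   # architecture named in summary
TIER3_PHRASES = ["foundation model"]                 # only in press/research material

def classify_tier(summary_text, public_text=""):
    """Return 1, 2, or 3 for the evidence tier, or None when neither
    the regulatory summary nor public materials show a signal."""
    summary = summary_text.lower()
    public = public_text.lower()
    if any(p in summary for p in TIER1_PHRASES):
        return 1
    if any(p in summary for p in TIER2_PHRASES):
        return 2
    if any(p in public for p in TIER3_PHRASES):
        return 3
    return None

# Inputs paraphrased from the clearances discussed above.
classify_tier("uses a foundation model-based artificial intelligence system")
classify_tier("replaces the CNN encoder with a Vision Transformer-based encoder")
classify_tier("a deep learning algorithm trained on labeled images",
              public_text="underpinned by advanced foundation models")
```

The ordering matters: summary-level evidence always outranks public statements, which is exactly why a device like Aidoc's K243548, with "foundation model" only in press materials, lands in Tier 3 despite the strength of the company's own claims.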
The architectural shift likely extends beyond what FDA databases suggest. Because generic "deep learning" language can conceal the underlying architecture, the 14 identified devices are best read as a floor, not a census: other cleared devices may rest on foundation models without leaving any public trace. The result is a widening transparency gap between technical innovation and official documentation.
The implications are significant for regulators, clinicians, and patients. As foundation models become more prevalent in medical devices, the FDA may need to evolve its documentation standards to capture architectural details that could affect long-term safety, performance, and adaptability. For now, the 14 cleared devices represent just the beginning of a broader shift toward foundation model-based medical AI, one that regulatory language has yet to fully acknowledge.