Doctors Are Now Building Their Own AI Tools. Here's Why Healthcare Systems Need to Prepare Now.

Physicians are increasingly using AI coding assistants like Claude Code to build custom clinical applications directly, bypassing traditional software development teams. This shift represents a fundamental change in how healthcare systems develop technology, but it also introduces significant security and regulatory challenges that institutions must address immediately.

Why Are Doctors Building Their Own Software?

The motivation is straightforward: healthcare systems move slowly. When physicians identify workflow problems or patient care gaps, waiting for IT departments and external vendors to develop solutions can take months or years. Claude Code, an agentic coding assistant that reads codebases, edits files, runs commands, and integrates with development tools, enables clinicians to prototype and deploy solutions themselves.

During a webinar hosted by Anthropic, emergency medicine physician Dr. Graham Walker, cofounder of the clinical decision tool MDCalc, posed a provocative question to health systems: "Why aren't we letting our physicians build these tools?" Walker and interventional cardiologist Dr. Michał Nedoszytko, who won third place at Anthropic's hackathon earlier this year, demonstrated how clinicians without extensive coding backgrounds can use Claude's Opus 4.7 and Sonnet 4.6 models to create functional clinical applications.

"If the EHR is a problem, maybe just create your own," Nedoszytko said.


Nedoszytko's point reflects a real frustration in healthcare: electronic health record (EHR) systems often don't adapt to specific clinical workflows. When doctors can build their own tools, institutional change accelerates dramatically.

What Are the Security Risks?

The enthusiasm for physician-led development comes with a critical caveat: security vulnerabilities. Security experts have raised alarms that novice developers using AI coding tools may inadvertently introduce serious defects that could expose patient data or compromise system integrity.

Dave Kennedy, CEO of security firm TrustedSec and a former NSA analyst, told Forbes that novice developers won't spot flaws, "introducing serious defects." He called the situation "very alarming." These concerns intensified after Anthropic announced Claude Mythos, its latest frontier model, which appears capable of detecting system vulnerabilities.


The timing is critical. On November 14, 2025, Anthropic disclosed the first AI-orchestrated espionage campaign, where Chinese state-sponsored groups used Claude Code to autonomously run full attack chains across approximately 30 global targets. This real-world example demonstrates that AI coding tools can be weaponized at scale, making healthcare systems particularly vulnerable if they deploy physician-built applications without proper security review.

How Should Healthcare Systems Prepare?

Experts and industry leaders have outlined specific steps healthcare organizations should take to safely enable physician-led development while protecting patient data and system integrity.

  • Security Audits: All physician-built applications must undergo professional security reviews before deployment. Nedoszytko emphasized that while tools can be created on a personal computer, running them with live patient data requires institutional engineering oversight and compliance validation.
  • HIPAA Compliance Integration: Anthropic is developing regulatory plug-ins beyond its current HIPAA audit skill, which Dr. Walker said could make compliance audits faster and less costly.
  • 90-Day Preparedness Plans: The Cloud Security Alliance released a whitepaper titled "The 'AI Vulnerability Storm': Building a 'Mythos-ready' Security Program," recommending that every organization develop a 90-day preparedness plan. The paper advises turning agents and LLM capabilities inward on your own code and dependencies: start immediately by asking an agent for a security review of any code, then build toward a full audit within your CI/CD pipeline.
  • Professional Engineering Oversight: Physicians building tools with Claude Code still need engineers for production-ready code. Nedoszytko noted that institutional deployment of patient-facing tools requires collaboration between clinicians and professional developers.
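The review steps above amount to a deployment gate: a physician-built tool should not touch live patient data until each sign-off is in place. A minimal sketch of such a gate in Python, with purely hypothetical check names (no real institutional system or API is assumed):

```python
from dataclasses import dataclass

@dataclass
class DeploymentReview:
    """Sign-offs a physician-built tool needs before handling live patient data.

    Field names are illustrative; each institution defines its own gates.
    """
    security_audit_passed: bool = False   # professional security review
    hipaa_review_passed: bool = False     # compliance validation
    engineering_signoff: bool = False     # production-readiness check by engineers

    def missing_gates(self) -> list[str]:
        """Return the names of sign-offs still outstanding."""
        return [name for name, done in vars(self).items() if not done]

    def approve_for_deployment(self) -> bool:
        """Deployment is allowed only when every gate has been cleared."""
        return not self.missing_gates()


# A prototype fresh off a personal laptop clears none of the gates:
review = DeploymentReview()
print(review.approve_for_deployment())  # False
print(review.missing_gates())
```

The point of the sketch is simply that approval is the conjunction of independent reviews: clearing one gate (say, a security audit) never implies the others, which mirrors Nedoszytko's distinction between building a tool on a personal computer and deploying it institutionally.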

The Cloud Security Alliance's whitepaper, authored by prominent cybersecurity leaders including Jen Easterly (CEO of RSAC and former director of the U.S. Cybersecurity and Infrastructure Security Agency), Chris Inglis (former National Cyber Director at The White House), and Rob Joyce (former NSA cybersecurity director), emphasizes that "AI-driven vulnerability discovery and exploit development have accelerated dramatically." The time between disclosure and exploitation is shrinking, requiring security teams to respond faster than current operating models allow.

What Does This Mean for Healthcare IT Leaders?

The shift toward physician-led development using AI tools is not a question of "if" but "when" for most health systems. Daisy Hollman, a developer on Anthropic's Claude Code team, noted that the company is actively working on regulatory plug-ins to support healthcare compliance. Last month, Anthropic released Claude Code Security to scan codebases and suggest patches, which the Cloud Security Alliance recommends as part of a comprehensive security strategy.

"This always needs to be run through your team," Nedoszytko said of deploying physician-built tools with live patient data.


Radi El Haj, CEO of payments company RS2, told Healthcare IT News that the implications reach beyond any one model: "Ultimately, this is less about a single model and more about a structural shift in how cyber risk is discovered, understood and managed. As AI continues to accelerate both insight and threat, the institutions that succeed will be those that treat cybersecurity not as a function, but as a core component of resilience and trust."


Healthcare systems that embrace physician-led development while implementing robust security frameworks will likely gain competitive advantages in speed and innovation. Those that ignore this trend risk falling behind as clinicians find workarounds or migrate to less regulated environments. The key is not to block physician innovation, but to channel it safely through proper governance, security review, and professional engineering support.