Artificial intelligence is rapidly reshaping healthcare, but legal and ethical guardrails are still catching up. Although the industry is heavily regulated, those rules do not yet squarely cover AI, leaving hospitals, insurers, and pharmaceutical companies to navigate murky waters. While AI can drive efficiency and improve patient outcomes, the risks, from algorithmic bias to legal liability, are impossible to ignore.
Dr. Danny Tobey, a physician and lawyer who co-chairs DLA Piper’s AI and data analytics practice, is at the forefront of these discussions. His firm has defended major cases involving AI “hallucinations” and algorithmic discrimination, making it a key player in shaping healthcare AI governance.
Regulations Are Coming—But Slowly
Right now, healthcare AI exists in a regulatory patchwork. Federal agencies like the FDA and HHS are issuing guidelines, while state-level regulators are taking their own approach. Lawsuits are beginning to set legal precedents, but no universal framework exists yet.
This has left healthcare executives in a difficult position: too much regulation in some areas, not enough in others. “No one wants a Wild West situation,” Tobey explains, “but our clients often feel like they have both too much and too little guidance at the same time.”
For example, hospitals using AI-driven diagnostic tools must comply with FDA rules, while insurers deploying AI risk assessment models might face scrutiny from state attorneys general. The lack of uniformity makes compliance a challenge.
AI’s Legal Risks: What Healthcare Leaders Need to Know
Generative AI is particularly tricky. It doesn’t just analyze data; it creates new content, answers complex questions, and even generates medical summaries. But it also makes mistakes—sometimes confidently presenting false information, a phenomenon known as “hallucination.”
Here are some of the most pressing legal risks:
- Bias and Discrimination: AI models trained on incomplete or biased datasets can produce unfair outcomes, leading to lawsuits and regulatory fines (a simple audit sketch follows this list).
- Medical Malpractice: If AI makes an incorrect diagnosis or recommendation, hospitals and providers could face legal consequences.
- Data Privacy Violations: AI systems handling patient data must comply with HIPAA and other privacy laws, but security breaches remain a major concern.
- Intellectual Property Issues: Who owns AI-generated medical insights? The hospital? The software vendor? This remains an open legal question.
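To make the bias risk concrete, here is a minimal sketch of the kind of disparity audit a governance team might run on a model’s decision logs. It is illustrative only: the group labels, the selection-rate metric, and the 0.1 tolerance are placeholder assumptions, not legal or regulatory standards.

```python
# Hypothetical illustration: a minimal fairness audit for a binary
# risk-scoring model. Group names, the metric, and the 0.1 tolerance
# are placeholders, not regulatory standards.
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive model decisions per group.

    `records` is an iterable of (group_label, model_decision) pairs,
    where model_decision is 1 (e.g., flagged high-risk) or 0.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, tolerance=0.1):
    """Flag if the gap between the highest and lowest group rate exceeds tolerance."""
    gap = max(rates.values()) - min(rates.values())
    return gap > tolerance, gap

if __name__ == "__main__":
    # Synthetic example data: (demographic group, model decision)
    sample = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1), ("B", 0)]
    rates = selection_rates(sample)
    flagged, gap = flag_disparity(rates)
    print(rates, "disparity flagged:", flagged, "gap:", round(gap, 2))
```

In practice such a check would draw on real decision logs and be reviewed alongside clinical and legal expertise; the point is that bias detection can be a routine, automated step rather than an afterthought.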
Despite these challenges, avoiding AI altogether isn’t a solution. “There’s a risk in not adopting,” Tobey warns. “Throwing out the baby with the bathwater would be a big mistake.”
Build or Buy? Weighing the AI Options
Healthcare organizations face a key decision: should they develop AI tools in-house or purchase them from vendors? Each approach comes with its own risks.
- Developing AI Internally: Allows for customization but requires significant expertise and resources. Poorly built models can lead to unreliable results.
- Buying from Vendors: Offers a quicker solution but may not be tailored to a specific healthcare setting. Off-the-shelf models might not align with an organization’s workflows or patient population.
Tobey emphasizes that risk isn’t just about the source—governance matters more. “You can have just as much or more risk from an off-the-shelf solution that isn’t properly trained for your environment as you can from trying to build your own AI without the right resources.”
Governance Is More Than a Checkbox
A strong AI governance framework isn’t just about compliance—it’s about ensuring AI works safely and effectively in real-world healthcare settings. But many health executives underestimate the cost of responsible AI management.
“It’s easy to spin out thousands of use cases using open-source models,” Tobey explains. “But governance and liability protection require serious investment—sometimes more than the AI tools themselves.”
Four pillars of effective AI governance stand out:
- Leadership Commitment: AI safety and ethics must be priorities at the highest levels of hospital and health system leadership.
- Dedicated Budgeting: Governance isn’t free—health organizations must invest in oversight, legal reviews, and AI risk management.
- Multi-Stakeholder Oversight: AI governance should involve legal, clinical, technical, and ethical experts, not just data scientists.
- Rigorous Testing: AI should be tested early and often to prevent unintended consequences before they impact patient care, as sketched below.
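As a concrete illustration of the testing pillar, the sketch below shows one way a team might gate a model release on a fixed, clinician-reviewed benchmark. It is a hypothetical example under stated assumptions: the `score_case` stand-in, the benchmark cases, and the 2% regression allowance are made up for illustration, not an established protocol.

```python
# A minimal sketch of a pre-deployment regression gate, assuming the
# organization keeps a fixed benchmark of labeled cases and a record of
# the previously approved model's accuracy. All names and numbers here
# are hypothetical placeholders.

BENCHMARK = [
    # (case features, expected label) -- in practice, clinician-reviewed cases
    ({"age": 71, "a1c": 9.1}, "refer"),
    ({"age": 34, "a1c": 5.2}, "routine"),
    ({"age": 58, "a1c": 7.4}, "refer"),
]

PREVIOUS_ACCURACY = 0.95   # accuracy of the model version currently in production
MAX_REGRESSION = 0.02      # largest acceptable drop before release is blocked

def score_case(case):
    """Stand-in for the candidate model; a real system would call the model here."""
    return "refer" if case["a1c"] >= 6.5 else "routine"

def benchmark_accuracy():
    correct = sum(score_case(case) == expected for case, expected in BENCHMARK)
    return correct / len(BENCHMARK)

def test_no_regression():
    """Block deployment if accuracy drops more than MAX_REGRESSION below the prior release."""
    assert benchmark_accuracy() >= PREVIOUS_ACCURACY - MAX_REGRESSION

if __name__ == "__main__":
    acc = benchmark_accuracy()
    print(f"benchmark accuracy: {acc:.2f}")
    test_no_regression()
    print("release gate passed")
```

Run under pytest or as a plain script, a gate like this turns “test early and often” into an enforceable step in the release process rather than a one-time validation exercise.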
What’s Next for Healthcare AI?
With AI adoption accelerating, regulatory clarity is expected to improve. New federal guidelines and legal precedents will shape how AI can be used in hospitals, clinics, and insurance companies.
Meanwhile, organizations that take AI governance seriously now will be in the best position to succeed. “The difference between safe, effective AI and risky AI isn’t just the technology itself,” Tobey says. “It’s the oversight, training, and safeguards behind it.”
For healthcare leaders, the message is clear: AI isn’t just a tool—it’s a responsibility. Those who embrace it wisely could transform patient care. Those who ignore it may find themselves left behind.