Artificial intelligence is transforming healthcare, but legal and ethical guardrails are still catching up. AI’s potential is enormous, yet so are the risks—especially in an industry where regulation is the norm. Dr. Danny Tobey, a physician-lawyer and AI expert at global law firm DLA Piper, says health leaders must strike a balance between innovation and compliance or risk falling behind.
The Patchwork of AI Regulations in Healthcare
Right now, AI regulation in healthcare is a tangled web of voluntary guidelines, federal mandates, and state policies. Federal agencies have issued guidance, but there's no single rulebook. Instead, hospitals and companies find themselves navigating a mix of:
- Federal regulations from agencies like the Food and Drug Administration (FDA) and Department of Health and Human Services (HHS).
- State laws, often led by attorneys general, targeting AI’s role in patient care and data protection.
- Lawsuits testing the boundaries of liability for AI-driven decisions in healthcare settings.
Some experts fear a “Wild West” scenario, where hospitals adopt AI tools without enough oversight. Others say the problem isn’t too little regulation—it’s too much. With conflicting guidelines and policies, healthcare leaders often struggle to know what’s allowed, what’s risky, and what’s outright illegal.
AI Litigation Risks: What’s Landing in Court?
Lawsuits over AI in healthcare are already happening, and the stakes are high. One of the biggest risks? The very thing that makes AI powerful—its ability to generate creative, non-deterministic responses—also makes it unpredictable. That unpredictability has led to several legal challenges:
- Hallucinations and misinformation: AI can fabricate data or misinterpret medical information, leading to potential malpractice claims.
- Algorithmic discrimination: AI models trained on biased data sets can reinforce disparities in healthcare delivery, opening hospitals up to discrimination lawsuits.
- Lack of transparency: Many AI systems operate as “black boxes,” making it difficult for providers to explain AI-driven decisions to patients or regulators.
DLA Piper has already defended lawsuits involving “black box” AI in insurance and errors made by generative AI tools. These cases signal a growing trend: as AI becomes more embedded in healthcare, legal battles will follow.
Build or Buy? The AI Adoption Dilemma
Health systems looking to implement AI face a key question: should they build their own tools or buy existing ones? Neither option is risk-free.
- In-house AI development: This allows customization for a hospital’s specific needs but requires significant investment in expertise, testing, and governance.
- Vendor AI solutions: These are easier to deploy but may not be tailored for a hospital’s patient population, leading to potential errors.
Dr. Tobey warns that off-the-shelf AI solutions can be just as risky as homegrown ones. Without proper oversight, an AI model trained on one population may produce inaccurate results in another. The safest approach? No matter where the AI comes from, governance and compliance must be built into its deployment.
Common Pitfalls When Hospitals Develop AI
Healthcare AI isn’t a one-size-fits-all solution. Even well-intentioned efforts can go wrong if hospitals fail to address key issues.
Some of the biggest mistakes?
- Relying on AI’s confidence instead of verifying its accuracy. Generative AI doesn’t just answer questions—it answers confidently, even when wrong.
- Ignoring the need for regular updates. AI trained on outdated data can produce flawed recommendations.
- Failing to set clear limitations. AI shouldn’t make clinical decisions; it should only assist providers in making informed choices.
AI’s ability to democratize healthcare is promising, but without safeguards, hospitals could be introducing new risks instead of solving old ones.
The Surprising Cost of AI Governance
Many hospital leaders underestimate the cost of AI governance. While AI tools can be relatively cheap to adopt, ensuring they’re safe and legally compliant is another story.
Tobey notes that hospitals can spin up AI models quickly using open-source tools. The problem? Just because an AI is easy to implement doesn’t mean it’s risk-free. Proper governance—including liability protection and compliance measures—can be more expensive than the AI itself.
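To illustrate how low that barrier to entry can be, here is a minimal sketch of the kind of quick setup Tobey describes, assuming the open-source Hugging Face transformers library and a purely hypothetical model name; it is an illustration, not a recommended deployment.

```python
# Minimal sketch: spinning up an open-source language model takes only a few lines.
# The model name below is hypothetical; any locally hosted open-source checkpoint could be used.
from transformers import pipeline

# Load a text-generation pipeline from a (hypothetical) open-source model checkpoint.
generator = pipeline("text-generation", model="example-org/clinical-llm-demo")

# Draft a patient-facing explanation; note that nothing here checks accuracy, bias, or compliance.
draft = generator(
    "Explain in plain language what an HbA1c test measures.",
    max_new_tokens=120,
)
print(draft[0]["generated_text"])
```

The point is the asymmetry: lines like these take minutes to write, while the validation, documentation, and liability review around them can consume the bulk of the budget.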
That’s why experts push for a proactive approach. Instead of waiting for problems to arise, hospitals should invest in oversight from the start.
What a Strong AI Governance Framework Looks Like
For hospitals to use AI safely, governance needs to be more than just a checkbox. Tobey outlines four key pillars that separate well-managed AI from risky deployments:
- Leadership buy-in. Senior executives and board members must take AI governance seriously, not treat it as an afterthought.
- Dedicated funding. AI safety requires a real budget; it can’t be treated as an IT department side project.
- Cross-functional oversight. Doctors, data scientists, lawyers, and compliance officers all need a seat at the table.
- Continuous testing. AI isn’t “set and forget”—it must be monitored to catch problems before they escalate.
Because AI can impact thousands, even millions, of patients at once, hospitals can’t afford to take a reactive stance. Ensuring AI safety from day one isn’t just a legal requirement—it’s a matter of patient trust.