Navigating the Legal Landscape of AI in Healthcare

Building AI for healthcare isn’t (just) a technical challenge – it’s a regulatory and ethical discipline in itself.

My perspective: Medical law is a living discipline. The legal framework is interpreted through practice – it’s not as simple as following a checklist to get it right.

In my previous role as chief psychiatrist, I appeared weekly in administrative court to defend my decisions before judges and attorneys. That’s how legal precedent is shaped: what constitutes a reasonable interpretation of the law on involuntary psychiatric care, what solutions are permissible, and which fall outside the lawmakers’ intent. Balancing patient autonomy with protection of life and health requires continuous legal interpretation.

Insights for developers: Planning to build an LLM to support diagnosis or clinical assessment? It’s not enough that your model works – you also need to navigate three parallel (and sometimes conflicting) regulatory frameworks:

  1. EU AI Act – classifies AI used in healthcare as high-risk. This requires transparency, bias control, human oversight, and documentation of the model’s purpose and limitations (a sketch of such documentation follows this list).

  2. MDR (Medical Device Regulation) – treats software that influences medical decisions as a medical device. This brings requirements like clinical evidence, a quality management system (ISO 13485), risk management (ISO 14971), CE marking, and post-market surveillance.

  3. GDPR – requires a legal basis for data processing, data minimization, privacy-by-design, and a clear division of responsibilities between the data controller (usually the healthcare provider) and the AI provider (as data processor).
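
One way to make the documentation requirement in point 1 (and MDR’s notion of intended purpose in point 2) concrete is to keep a machine-readable “purpose and limitations” record under version control next to the model. Below is a minimal sketch in Python; the schema, field names, and example values are my own illustration, not a format prescribed by the AI Act or MDR:

```python
from dataclasses import dataclass, field

@dataclass
class IntendedUseRecord:
    """Illustrative record of a model's purpose and limitations.

    The schema is a suggestion, not a regulatory standard; the point is
    to version this record alongside the model code.
    """
    model_name: str
    intended_purpose: str          # what the model is for, stated narrowly
    clinical_context: str          # setting, patient population, care pathway
    level_of_autonomy: str         # "indicate" / "suggest" / "diagnose"
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = ""      # who reviews the output, and when

# Hypothetical example entry, kept under version control next to the model
record = IntendedUseRecord(
    model_name="triage-llm-v0.3",
    intended_purpose="Suggest a differential to the clinician; no autonomous decisions.",
    clinical_context="Adult outpatient intake, Swedish-language clinical notes.",
    level_of_autonomy="suggest",
    known_limitations=[
        "Not validated for patients under 18",
        "Performance degrades on non-Swedish text",
    ],
    human_oversight="A licensed clinician reviews every suggestion before it reaches the record.",
)
```

Versioning a record like this means every change in scope (say, from “suggest” to “diagnose”) leaves a trace that audits and post-market surveillance can follow.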

These frameworks overlap – and sometimes collide. One example: the AI Act demands transparency in the model’s decisions, while GDPR requires minimizing the amount of data processed. Finding this balance means integrating the regulations from day one – not as an afterthought.

Tips for AI developers in healthcare:

• Define early what your model is supposed to do (indicate? suggest? diagnose?). This determines its classification and regulatory path.
• Plan for clinical validation from the start – or you’ll lack the evidence needed for CE marking.
• Understand that explainability, bias testing, and data governance are not side tasks – they are core components of your product.
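
To illustrate the last point: bias testing can be wired into every evaluation run rather than done once at the end. Here is a minimal sketch; the subgroups, data shape, and the five-percentage-point threshold are illustrative assumptions, not values taken from any regulation:

```python
from collections import defaultdict

def subgroup_metrics(records, threshold=0.05):
    """Compare a simple error rate per subgroup against the overall rate.

    `records` is an iterable of dicts with keys "group", "y_true", "y_pred";
    both the data shape and the threshold are illustrative assumptions.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += int(r["y_true"] != r["y_pred"])

    overall = sum(errors.values()) / sum(totals.values())
    report = {}
    for group, n in totals.items():
        rate = errors[group] / n
        report[group] = {
            "n": n,
            "error_rate": round(rate, 3),
            "flag": abs(rate - overall) > threshold,  # review if it deviates too far
        }
    return overall, report

# Example: flagged subgroups go into the evaluation report, not a drawer
overall, report = subgroup_metrics([
    {"group": "age_65_plus", "y_true": 1, "y_pred": 0},
    {"group": "age_65_plus", "y_true": 0, "y_pred": 0},
    {"group": "age_under_65", "y_true": 1, "y_pred": 1},
    {"group": "age_under_65", "y_true": 0, "y_pred": 0},
])
```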

At every step of development, testing, and deployment, your team needs to understand the legislators’ intent. When the regulations collide, there’s no predefined answer – your team’s insight is key to making sound decisions.

Document your reasoning. Keep a “development journal” of key discussions and trade-offs made along the way.
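
What that journal looks like is up to the team; one lightweight option is an append-only log kept in the repository. A minimal sketch, where the schema and the example entry (including the retention period) are purely illustrative:

```python
import datetime
import json
import pathlib

def log_decision(path, topic, options, decision, rationale, participants):
    """Append one trade-off decision to a JSON-lines development journal.

    The schema is a suggestion, not a standard: each entry should capture
    what was weighed, what was chosen, and why.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "topic": topic,
        "options_considered": options,
        "decision": decision,
        "rationale": rationale,
        "participants": participants,
    }
    with pathlib.Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Hypothetical entry: the transparency-vs-minimization trade-off discussed above
log_decision(
    "dev_journal.jsonl",
    topic="Logging model inputs for explainability vs. GDPR data minimization",
    options=["log full prompts", "log pseudonymized prompts", "log nothing"],
    decision="log pseudonymized prompts, 90-day retention",
    rationale="Supports AI Act transparency while keeping personal data to a minimum.",
    participants=["clinical lead", "DPO", "ML engineer"],
)
```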

Finally: It’s not “human-in-the-loop” that makes AI safe in healthcare – it’s building with responsibility from the very beginning.

Many thanks to Zahoor Ul Islam for excellent insights.


/Markus Boman
