How the EU AI Act Affects Clinical Trials


Contributing Expert: Ian Davison, Subject Matter Expert at Medrio

In June 2024, the European Union signed the Artificial Intelligence (AI) Act into law. The EU AI Act touches many industries, including clinical research. As clinical trials increasingly incorporate AI, this law carries significant weight.

In this article, you will learn what the EU AI Act is, when it takes effect, how it classifies risk, and what it means for clinical trials.

Wondering about the current state of AI?
Download our AI in Clinical Trials eBook to find out more. 

What Is the EU AI Act?

The EU AI Act aims to comprehensively regulate AI systems across industries in the European Union. Its goal is to provide governance while building trust and supporting the ethical use of AI. The act also aims to support innovation while minimizing risk. To this end, it outlines a risk classification system with four levels of AI systems, ranging from minimal to unacceptable. 

How Does the EU AI Act Interact with Other EMA Guidance?

The EU AI Act does not remove the need for sponsors to comply with any relevant guidance provided by the EMA.

Recital 64 of the Act highlights that medical devices using an AI system may pose risks not specifically addressed within the Act. Therefore, “this calls for a simultaneous and complementary application of the various legislative acts.”

Sponsors have some flexibility in ensuring compliance. For example, they may integrate the testing, reporting processes, and documentation required under the Act into existing documentation and procedures required by the EMA.

EMA regulatory guidance for artificial intelligence

While the use of AI in clinical trials is not yet specifically governed by the European Medicines Agency (EMA), the agency has begun to weigh in.

So far, the agency has published:

  • Reflection paper on the use of AI in the medicinal product lifecycle (draft, July 2023)
  • Artificial Intelligence Workplan (December 2023)

Want to know more about AI-focused regulatory frameworks? Read our blog on AI global regulatory guidance.

When Will the EU AI Act Take Effect?

The EU AI Act will take effect gradually. It entered into force on 1 August 2024 and becomes fully applicable two years later, on 2 August 2026.

Important dates to know:

  • February 2, 2025: Prohibitions, definitions, and provisions related to AI literacy apply.
  • August 2, 2025: Rules on governance and the obligations for general-purpose AI apply.
  • August 2, 2026: The Act becomes generally applicable, two years after entry into force.
  • August 2, 2027: Obligations take effect for AI systems classified as high-risk because they are embedded in regulated products listed in Annex I (the list of Union harmonisation legislation).

Looking for more information about upcoming dates? Check out the European Commission’s AI Act Questions and Answers webpage.

Risk Classification Within the EU AI Act for Clinical Research

The EU AI Act takes a risk-proportionate approach. This approach includes four levels of AI systems, ranging from minimal to unacceptable.

Each risk level has corresponding regulatory requirements. Any AI used within a clinical trial in the EU needs to comply with the new law.

The EU AI Act categorizes AI systems into four risk levels:

  • Unacceptable risk: AI systems that pose a clear threat to fundamental rights, such as real-time biometric surveillance, are banned.
  • High risk: AI in critical areas like healthcare, recruitment, and law enforcement faces strict compliance requirements, including transparency, data governance, and human oversight.
  • Limited risk: AI systems with manipulation potential, such as chatbots, are subject to transparency obligations; users must be informed that they are interacting with AI.
  • Minimal risk: Many AI systems, such as recommendation algorithms and spam filters, face no specific regulations beyond existing laws.

Learn about what to ask vendors in this AI questions in clinical trials infographic.

How are clinical trial activities classified?

“High risk” is the highest acceptable risk level, and systems in this category are subject to the most stringent requirements. Several clinical trial-related AI systems could fall into this category.

AI systems used in clinical trials will likely be considered “high risk” if they are part of:

  • Patient recruitment
  • Allocation of treatment
  • Diagnostics
  • Data management
  • Synthetic data generation
  • Decision-making 
  • Medical devices (Recital 50 of the EU AI Act)

What are the requirements for high-risk AI systems?

High-risk AI systems must adhere to rigorous standards for data quality, transparency, and human oversight. This includes comprehensive documentation and conformity assessments to verify compliance.

To meet the EU AI Act’s mandatory requirements for trustworthiness, sponsors should consider:

  • Data quality
  • Documentation and traceability
  • Transparency
  • Human oversight
  • Accuracy
  • Cybersecurity and robustness

How the EU AI Act Impacts Data Management in Clinical Trials

The EU AI Act’s requirements may influence how sponsors use AI in clinical trials, particularly for data collection and analysis.

When considering how to meet requirements, sponsors should focus on data transparency, data governance, and human oversight. Ensuring that AI systems meet standards will be important for clinical trial integrity.

Limited use of black-box AI 

The Act pushes for explainable AI. Therefore, AI models must provide reasoning behind their outputs. This requirement could challenge certain deep-learning models unless they offer sufficient transparency.

Bias mitigation

AI systems that generate or analyze trial data must meet stringent accuracy and fairness requirements. 

In a Q&A, lawmakers stated that AI systems must “not produce biased results, such as false positives or negatives, that disproportionately affect marginalised groups, including those based on racial or ethnic origin, sex, age, and other protected characteristics.”  

Bias mitigation will be key. It ensures AI doesn’t inadvertently skew patient selection or data interpretation.

When using AI systems in clinical trials, sponsors can help avoid bias by:

  • Training and testing models with “sufficiently representative datasets”
  • Ensuring systems are traceable and auditable
  • Keeping appropriate documentation, including the data used to train the algorithm 
  • Monitoring the system regularly
  • Addressing potential risks promptly 
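To illustrate the first point, a minimal sketch of a representativeness check on training data. The group names and the 0.5 flagging threshold are hypothetical assumptions for illustration only; the Act prescribes no specific metric or cutoff.

```python
# Minimal sketch: flag under-represented subgroups in a training dataset.
# The 0.5 threshold and the age-band groups below are illustrative
# assumptions, not values prescribed by the EU AI Act.

def representation_gaps(train_counts, reference_shares, threshold=0.5):
    """Return groups whose share of the training data falls below
    `threshold` times their share of the reference population."""
    total = sum(train_counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        train_share = train_counts.get(group, 0) / total
        if train_share < threshold * ref_share:
            gaps[group] = {"train": round(train_share, 3), "reference": ref_share}
    return gaps

# Example: age bands in an enrolled-patient dataset vs. the target population
train = {"18-40": 500, "41-65": 420, "65+": 80}
reference = {"18-40": 0.35, "41-65": 0.40, "65+": 0.25}
print(representation_gaps(train, reference))  # flags the 65+ group
```

A check like this would typically run as part of regular monitoring, with flagged gaps documented alongside the training-data records the Act expects sponsors to keep.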

An opportunity for innovation in clinical trials?

While regulation adds complexity, it also provides a clear framework that can foster innovation. 

For example, the AI Act supports the creation of regulatory sandboxes and real-world testing. These give developers a controlled environment in which to test new technologies for a limited time.

Prepare for the EU AI Act Today

Sponsors and CROs must understand and comply with the EU AI Act. Companies that proactively align their AI systems with these regulations may gain a competitive advantage in the EU market. Focusing on compliant, scalable AI solutions may drive efficiency in research.

In preparation for the full implementation of the EU AI Act, take a proactive approach. 

To do so, focus on auditing AI systems, improving documentation, and implementing bias mitigation strategies. These activities should already be familiar to the developers of clinical research systems since computer system validation has long been rigorously regulated.

How are today’s trials using AI?
Learn more in our State of AI in Clinical Trials eBook.


European Artificial Intelligence Act Frequently Asked Questions (FAQ)

Common questions about the EU AI Act.

Will AI in clinical trials require new approvals under the EU AI Act? 

Yes, high-risk AI systems will need conformity assessments. Before deployment, ensure systems meet EU safety and ethical standards. AI integrated into medical devices must also align with existing medical regulations.

Does the EU AI Act apply to companies outside the EU?

Any company conducting clinical research in the EU or using AI that affects EU citizens must comply with the regulation, regardless of its headquarters.

What are the penalties for non-compliance with the EU AI Act?

Fines for violations can reach up to €35 million or 7% of global annual turnover, whichever is higher, depending on the severity of the breach.


The blog does not constitute legal or other professional guidance. Please refer to proper industry documentation or known notified bodies for legal or professional advice.

Subscribe to our mailing list

Sign up to have our latest insights delivered to your inbox.
