Navigating the EU AI Act in life sciences.

This editorial was written for CataloniaBio & HealthTech, the organisation that represents companies in the biomedicine and health sector in Catalonia.

After years of negotiations, the EU AI Act has finally become a reality. Initially a source of concern for potentially over-regulating AI research, this landmark legislation has instead focused primarily on AI use cases, a welcome relief for many in the field.

At the heart of the Act is the categorization of AI applications into distinct risk tiers. It identifies some specific uses as either unacceptable risk, which are outright prohibited, or high-risk, which must meet stringent requirements. The majority of uses, however, are classified as limited or minimal risk and are subject, at most, to light transparency obligations.

In the life sciences sector, many AI applications – such as those used in patient diagnosis, treatment recommendations, and drug development – fall under the high-risk category. These are areas where AI decisions have significant implications for human lives and will therefore be subject to heightened scrutiny.

I believe this approach is beneficial. AI systems making critical healthcare decisions must embody fairness, transparency, and explainability. As we integrate AI deeper into life sciences, these principles become the pillars that ensure that technological advancement aligns with human values and ethics. This is not just a regulatory obligation; it is also a moral one.

It’s equally important for the AI industry to innovate and fulfill the transformative promise of AI. This requires developing technologies that are transparent and interpretable, giving us the understanding we need to trust decisions made by AI. In the context of life sciences, where the stakes are exceptionally high, trust in an AI decision is only earned when we can understand “the why” behind it.

Therefore, we should only allow technology that ensures safety, fairness, and clarity in high-risk applications to benefit from our scientific, technical, and entrepreneurial communities in Europe. As the CEO of Abzu, an AI research and development startup that is now best in class in in silico RNA therapeutics design, I’ve seen firsthand how AI can drive innovation. And I’ve also seen areas – black, white, and gray – where people weighed a perceived tradeoff between explainability and innovation.

And I’m glad to say that such a tradeoff isn’t a reality: We can demand explainability in high-risk applications and not fall behind in the global AI race. In fact, this unique quality can leapfrog us to the front of the AI innovation line.
