The European approach: Towards trustworthy AI

Something is brewing in Europe. Is it an ecosystem of excellence?

Europe is spearheading trustworthy AI

Something is brewing in Europe. As the hype around AI is sweeping the world, a call for ethical guidelines and regulation has started to emerge. Nowhere is this clearer than in Europe, which is spearheading the attempt to design a legal framework that ensures trustworthy AI.

In a nutshell: Trustworthy AI means mitigating risks to people’s health, safety, and fundamental rights and respecting ethical values such as human autonomy, prevention of harm, fairness, and explicability. [1]

The EU believes in the massive potential of AI to contribute to society, increase human well-being, and boost research and innovation. The EU’s bold new standards are likely to impact far beyond its own borders and change the rules of the game in AI significantly. But before we turn to Europe’s new approach to AI, it helps to understand why a new approach is needed. What are the risks posed by AI? And why are existing laws not cutting it?

The risks of AI

The risk-inducing features of current AI systems are their opacity, complexity, dependency on data, and their more-or-less autonomous behavior. Among the potentially adverse consequences of using AI systems are:

  • Unjustified actions due to inconclusive analyses,
  • Inscrutable decisions due to opaque algorithms,
  • Bias due to misguided evidence,
  • Unfair, discriminatory outcomes,
  • Problems with tracing responsibility for decisions, and
  • Transformative effects on people’s behavior and society, challenging human autonomy and privacy.

We see that the risks of AI systems are at least partly due to the quality of the data they use. Stringent rules on the use of data are already in place in the EU.

Does that mean that existing data regulation might serve as a regulation of AI? Probably not.

The status quo: Current data protection as AI regulation?

Data regulation in the EU derives first and foremost from the General Data Protection Regulation (GDPR), which took effect in 2018. It provides individuals with control and rights over their data and lays out the regulatory regime with which companies must comply. Yet it also contains two items that are potentially relevant to the regulation of AI. The first stems from the following passages:

When an individual is subject to “a decision based solely on automated processing” that “produces legal effects […] or similarly significantly affects him or her”, the GDPR grants the individual the right to request “meaningful information about the logic involved” (Articles 15 and 22).

This has been interpreted as a right to explanation for the person subjected to an automatic decision-making process.
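
The GDPR does not say what “meaningful information about the logic involved” should look like in practice. As a purely illustrative sketch (not anything the regulation prescribes), a provider using a transparent model could generate a per-feature breakdown of an automated decision; the feature names, weights, and threshold below are hypothetical.

```python
# A minimal sketch of a "right to explanation" response, assuming a simple
# linear scoring model. Feature names, weights, and the threshold are
# illustrative, not taken from any real system or from the GDPR itself.

FEATURE_WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
APPROVAL_THRESHOLD = 0.5

def explain_decision(applicant: dict) -> str:
    """Return a human-readable summary of how each input shaped the score."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    lines = [f"Decision: {decision} (score {score:.2f}, threshold {APPROVAL_THRESHOLD})"]
    # List contributions from most to least influential.
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  {name}: contributed {value:+.2f}")
    return "\n".join(lines)

print(explain_decision({"income": 0.9, "debt_ratio": 0.7, "years_employed": 0.5}))
```

The point of the sketch is that a legible model makes the “logic involved” straightforward to report; with an opaque model, producing such a summary is far harder, which is part of why the enforceability of these passages is debated.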

However, the exact interpretation of these passages is debated, and it remains unclear how to enforce them. They therefore do not seem to provide any real protection against the risks AI can pose.

The second potentially AI-relevant item in the GDPR is the concept of Privacy by Design. According to the GDPR, data controllers ought to take data protection into account during the design process by including measures like pseudonymization and data minimization such that “by default, only personal data which are necessary for each specific purpose of the processing are processed” (Article 25).
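
As a minimal illustration of how these two Article 25 measures might look in code (the field names and processing purpose are assumptions for the example, not anything the GDPR specifies):

```python
# A sketch of pseudonymization and data minimization, assuming the stated
# processing purpose only requires age and region. Field names are hypothetical.
import hashlib

REQUIRED_FIELDS = {"age", "region"}  # data minimization: keep only what the purpose needs

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonymization)."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def minimize(record: dict, salt: str) -> dict:
    """Drop all fields not needed for the stated purpose and pseudonymize the ID."""
    slim = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    slim["pseudonym"] = pseudonymize(record["user_id"], salt)
    return slim

record = {"user_id": "u-1042", "name": "Jane Doe", "age": 34,
          "region": "DK", "email": "jane@example.com"}
print(minimize(record, salt="rotate-me-regularly"))
```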

However, many of the issues raised by AI go beyond personal data issues and include, for instance, the protection of fundamental rights and non-discrimination. The protection of data privacy is therefore not sufficient to address the concerns surrounding AI technologies.

Something more is needed.

The Artificial Intelligence Act: Creating an ecosystem of trust

In April 2021, the EU Commission published a proposal for an Artificial Intelligence Act (henceforth the AI Act), the first major attempt anywhere in the world to deliver a comprehensive legal framework for the regulation of AI. A key objective is to create an ecosystem of trust. The primary means to achieve this is to regulate high-risk uses of AI (i.e., systems that pose significant risks to health, safety, or fundamental rights). Providers of such systems must conduct a conformity assessment proving that the following requirements have been met:

  1. Adequate risk assessment and mitigation systems.
  2. High quality of the datasets feeding the system to minimize risks and discriminatory outcomes.
  3. Logging of activity to ensure traceability of results (see the sketch after this list).
  4. Detailed technical documentation providing all information necessary on the system and its purpose to assess its compliance.
  5. Clear and adequate information to the user.
  6. Appropriate human oversight measures to minimize risk.
  7. High level of robustness, security, and accuracy.
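
The proposal leaves the concrete form of these measures to providers and to forthcoming harmonized standards. As a purely illustrative sketch of requirement 3, a provider might wrap each prediction in an append-only audit record; the log fields and the `model.predict` interface below are assumptions, not anything the AI Act prescribes.

```python
# A minimal sketch of activity logging for traceability, assuming a generic
# model object with a `predict` method and a `version` attribute.
import json
import time
import uuid

def predict_with_audit_log(model, features: dict, log_path: str = "audit.log"):
    """Run a prediction and append a traceable record of it to the log."""
    prediction = model.predict(features)
    entry = {
        "event_id": str(uuid.uuid4()),  # unique handle for tracing one result
        "timestamp": time.time(),       # when the decision was made
        "model_version": getattr(model, "version", "unknown"),
        "inputs": features,             # exactly what the system saw
        "output": prediction,           # exactly what it returned
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # one JSON line per decision
    return prediction

class StubModel:
    version = "1.0.0"
    def predict(self, features):
        return {"risk": "low"}

print(predict_with_audit_log(StubModel(), {"age": 34}))
```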

Moreover, the proposal prohibits systems that involve unacceptable risks, i.e., social scoring, subliminal manipulation of behavior [2], real-time remote biometric identification, and systems that exploit vulnerabilities of a specific group of persons due to their age, physical or mental disability, or socio-economic status. [3]

The proposal also includes transparency obligations for certain systems, specifically those that (i) interact with humans (e.g., chatbots), (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). Roughly, these are systems that either try to imitate people or to manipulate or deceive them. Providers of such systems are obliged to disclose to users that they are interacting with such systems or exposed to manipulated content.

Finally, the EU wants to encourage providers of non-high-risk systems to adopt codes of conduct, i.e., to voluntarily adopt the same standards required of high-risk systems. Those who choose to do so will be awarded quality labels that signal the high standards and trustworthiness of their product or service.

What are the expected benefits of such a relatively stringent regulation?

First, by protecting the health, safety, and fundamental rights of EU citizens, the Union hopes to create an environment where people have high levels of trust in AI technologies. This should, in turn, increase the uptake of these technologies as people become willing to accept more extensive use of AI.

Second, adopting harmonized standards across the EU means that once a product has been approved, it can be placed on the entire single market.

Compared to an environment where each member state adopts different rules, this creates much more legal certainty for providers and facilitates greater investment and innovation in AI in Europe.

The Commission’s proposal is now being scrutinized by the EU Council and EU Parliament but is not expected to change greatly.

Once the AI Act is passed into law – which might take several more years – a further two-year transition period will follow before it applies. If the history of the GDPR is any guide, it could be up to five more years before the law finally takes full effect.

During this time, the EU and member states will begin to establish the necessary bodies to implement the new regulation.

Yet now that a concrete proposal is on the table, it is becoming increasingly clear how AI is going to be regulated in the near future and how this could change the ways AI is developed and deployed. The AI Act puts a premium on trustworthy AI. A great way to achieve trustworthiness is by transitioning to explainable and transparent approaches to AI. Going forward, such AI systems should therefore increasingly be incorporated into the business and marketing strategies of providers and users of AI. Adopting this kind of technology is a way for businesses and organizations to future-proof themselves. And this is exactly the kind of technology that Abzu is already providing.

An ecosystem of excellence

The second goal of the European approach to AI is to create an ecosystem of excellence in AI. This is advanced through a series of initiatives aimed at stimulating innovation and investment in AI.

The EU also plans to strengthen the sharing of high-quality data and know-how across Europe and to build a better infrastructure of data and computing resources. As part of its ambition to accelerate its digital transition, the EU has launched initiatives to make more currently unused data available (e.g., from hospitals) and to incentivize businesses, organizations, and individuals to share more data in a trustworthy manner.

One of the ways in which this data will be made available is through so-called regulatory sandboxes, i.e., controlled environments that facilitate the development, testing, and validation of innovative AI systems. Regulatory sandboxes will be introduced with the AI Act and are an attempt to create a legal basis for the use of personal data when this is in the public interest. This will happen under the direct supervision and guidance of competent authorities.

An initiative that has already materialized is the Digital Innovation Hubs. These are one-stop shops that help companies respond to digital challenges and become more competitive. They provide access to technical expertise, data spaces, and testing and experimentation facilities, and they offer innovation services such as financing advice, networking opportunities, and support for training and skill development. Here, companies are also offered the opportunity to test a product before deciding to invest in it. All this is intended to help companies improve their products and services and become more efficient and innovative by incorporating more digital technologies into their operations.

Finally, the EU is taking steps to accelerate, act on, and align strategies towards excellence in AI across the Union and plans to invest massively in AI research and innovation. Further funding will go to establishing networks of AI excellence centers and to creating the AI Lighthouse for Europe, i.e., an alliance of strong European research organizations that can strengthen excellence in basic and applied AI research and attract and retain talent in Europe. The EU plans to invest at least €1 billion per year in AI from the Horizon Europe and Digital Europe programs in 2021-2027. €134 billion (or 20%) of the Recovery and Resilience Facility created to help Europe recover economically from the COVID-19 pandemic has been earmarked for the digital transition, including AI technologies. And the EU aims to gradually increase public and private investment in AI to a total of €20 billion per year over the course of this decade.

Shaping global standards

The EU hopes that with its pioneering rules on AI it will be able to change standards globally towards trustworthy AI. This could happen through at least two routes:

First, there is the Brussels effect: Non-European providers of AI technologies might adopt the high European standards uniformly throughout their enterprise, even in non-European markets, since it might become economically, legally, or technically impractical to maintain lower standards elsewhere. European standards could thereby spread around the globe, driven by market mechanisms. Some foreign governments might also choose to pass similar laws, perhaps driven by an increased awareness of user protection and trustworthiness among their own citizens and policymakers. These effects have already been observed when the EU adopted high standards in other areas such as environmental protection, food safety, and data and privacy protection.

Another way the EU standards might exert global influence is through trade negotiations and other multilateral agreements. One indication of this is the establishment of the EU-US Trade and Technology Council, which held its first meeting in September 2021. The council’s goal is to work towards harmonized standards, increased trade, investment, and innovation, and an enhanced computing infrastructure by strengthening EU-US collaboration on AI.

What all this shows is that the AI landscape is on the verge of another transformation – this time one driven by legal and ethical considerations.

Now that standards for trustworthy AI are about to be passed into law in Europe, awareness of these issues will only grow. Meeting the new standards will require new approaches to AI. Transparent and explainable AI is an especially effective way to achieve trustworthy AI. It therefore seems fair to say that transparent and explainable AI really is the technology of the future, and that the time has come to embrace it.

Footnotes

[1] These ethical principles are listed in both the AI4People’s Ethical Framework for a Good AI Society and the EU’s High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI.

[2] This means outside the person’s awareness. Think of recommender systems designed to hold onto your attention for longer than you might wish.

[3] As part of their proposed amendments, the EU Council proposed to add socio-economic status to the list of exploitable vulnerabilities.
