Explicability as the foundation of ethical AI.

Be it policy makers or developers of AI systems, everyone seems to be talking about responsible or trustworthy AI these days. Yet it isn’t always clear what these terms are supposed to mean.

Cutting through the definitional haze, trustworthy AI consists of three components: it ought to be legal (i.e., comply with existing laws and regulations), ethical (i.e., adhere to ethical principles and values), and robust (i.e., include safeguards to prevent unintended adverse effects).

The legal landscape around AI has been undergoing profound transformations lately, which I have written about in The European approach: Towards trustworthy AI. But a legal framework alone is not enough. Laws often lag behind technological developments (especially in rapidly evolving fields such as AI), they can conflict with ethical norms, and, due to their fixed and codified nature, they may be ill-suited to address certain issues. This is where ethics takes over.

Ethical AI: Where would you start looking for solutions in a nearly infinite space?

Let’s go through some of the demands of truly ethical AI.

Four standard ethical AI principles.

In practice, when we are trying to tackle ethical issues in a specific domain, the standard approach is to invoke a set of relevant ethical principles. For example, in bioethics (the study of ethical issues arising from advances in biology, medicine, and biotechnologies, and the most established field within applied ethics) four principles are canonically invoked:

  1. Beneficence: To promote well-being and the common good, preserve human dignity, and sustain the planet.
  2. Non-maleficence: To prevent harm, including to the privacy of individuals.
  3. Respect for human autonomy: Individuals have a right to make decisions for themselves, including the right to decide whether to make a given decision themselves or to delegate it.
  4. Justice: Promoting prosperity, preserving solidarity, a fair distribution of resources and opportunities (e.g., in the form of social support systems), and avoiding unfair outcomes and discrimination (e.g., due to bias).

Meta-analyses of the vast array of AI ethics guidelines published in recent years show a striking convergence on principles like these. However, a further and absolutely essential principle has also emerged from these discussions: Explicability.

Explicability: What it is, and why it is important.

A key observation about most contemporary AI systems is that their workings are largely invisible or unintelligible to all but a few experts – at best. It is therefore now recognized that for AI technologies to be developed and deployed in an ethical manner, an additional principle needs to be added to the list: Explicability. According to the influential ethical guidelines proposed by the AI4People network, explicability can be understood as having two components (the first of which is illustrated in the sketch below):

  1. Intelligibility, i.e., an answer to the question “How does the system work?”
  2. Accountability, i.e., an answer to the question “Who is responsible for the way it works?”
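To give the first of these questions a concrete shape, here is a minimal sketch in Python. The dataset, the scikit-learn tooling, and the choice of an inherently interpretable linear model are illustrative assumptions on my part, not something the AI4People guidelines prescribe:

```python
# Illustrative sketch of the "intelligibility" component: an inherently
# interpretable model whose learned weights can be read out directly.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train a simple, transparent model on a standard toy dataset.
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Intelligibility: "How does the system work?" For a linear model, the
# weights show which inputs drive decisions, and in which direction.
coefs = model.named_steps["logisticregression"].coef_[0]
for i in np.argsort(np.abs(coefs))[::-1][:5]:
    print(f"{data.feature_names[i]}: weight {coefs[i]:+.2f}")
```

The point is not this particular model, but that its decision logic can be read out and communicated – a property that is much harder to obtain from large black-box models.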

First, explicability is important to build and maintain trust in AI systems.

Second, explicability is a precondition for observing the four ethical principles of beneficence, non-maleficence, respect for human autonomy, and justice. To ensure the system promotes well-being and prevents harm, we need to understand the good and the harm the system is capable of bringing upon society, and how this happens. To ensure the system respects human autonomy, it is crucial that, when we consider whether to delegate decision-making to the system, we have adequate knowledge of how the system would act in our place. And finally, to make sure the system is just, we need to ensure that developers and deployers can be held accountable in the event of negative outcomes, which in turn requires that we are able to understand why the outcome occurred.

In their Ethics Guidelines for Trustworthy AI, the European Commission’s High-Level Expert Group on AI echoes the centrality of explicability for AI ethics, though their wording is slightly different. First, they argue that processes need to be transparent, i.e., the capabilities, limitations, and purpose of the AI system must be openly communicated. Second, to the extent possible, decisions must be explainable to those directly or indirectly affected by them. According to the High-Level Expert Group, these features are important because they make it possible to contest potentially flawed decisions and to build and maintain a culture of trust around AI.

More specifically, data sets, data gathering, data labelling, decision processes, and algorithms used should be documented to the best possible standard to make it possible to trace reasons for error and prevent future mistakes.
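What such documentation looks like in practice will vary, but as a minimal, hypothetical sketch, a machine-readable record kept alongside the model might cover both components of explicability at once. The schema and field names below are my own invention, not a prescribed standard:

```python
# Hypothetical "model card"-style record of data provenance, labelling,
# and algorithm choices, so that reasons for error can later be traced.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    model_name: str
    algorithm: str
    training_data_source: str    # where and how the data was gathered
    labelling_procedure: str     # how labels were assigned, and by whom
    intended_use: str            # purpose, openly communicated
    responsible_party: str       # the accountability component
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="loan-screening-v1",
    algorithm="logistic regression",
    training_data_source="2019-2023 internal applications database",
    labelling_procedure="repayment outcome observed after 24 months",
    intended_use="pre-screening support; final decision stays with a human",
    responsible_party="credit-risk team",
    known_limitations=["under-represents applicants under 25"],
)

# Persist alongside the model so decisions remain traceable.
print(json.dumps(asdict(card), indent=2))
```

Intelligibility is served by recording how the system was built; accountability by recording who is responsible for it.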

It must be possible for people to understand and trace decisions, and whenever the system has a significant impact on people’s lives, it should be possible for them to obtain a suitable explanation of the decision-making process. Organizations must be transparent about how the system influences decision-making processes in the organization and the rationale for using it in the ways that they do.
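In the simplest case, a suitable explanation of an individual decision might amount to ranking the factors that pulled that decision one way or the other. Again as an illustrative sketch, assuming scikit-learn and a linear model (a real system would need domain-appropriate wording and careful validation):

```python
# Illustrative per-decision explanation: for one prediction from a linear
# model, rank each input's contribution (weight x standardized value).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# One individual's case: which inputs pulled this particular decision?
x = X[0]
contributions = model.coef_[0] * x
decision = data.target_names[model.predict(x.reshape(1, -1))[0]]
print(f"Decision: {decision}")
print("Main factors behind this decision:")
for i in np.argsort(np.abs(contributions))[::-1][:3]:
    print(f"  {data.feature_names[i]}: contribution {contributions[i]:+.2f}")
```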

And finally, when it isn’t obvious, people ought to be made aware that they are interacting with an AI system and have the option to opt out of such an interaction. For these reasons, transparent and explainable AI is crucial to building a future where AI is both trustworthy and ethical.

Ethical AI: Reaping the benefits, avoiding the costs.

AI presents an array of amazing opportunities as well as some corresponding risks. For instance, AI4People presents the following list:

  1. AI can help us spend more of our lives more intelligently, but it could also devalue our skills and leave us feeling redundant.
  2. AI can enhance our abilities to do more, better, and faster, but it could also lead to decision-making we do not understand, have no control over, and where it is hard to hold anyone responsible in case of errors.
  3. AI can help us coordinate better, pursue more ambitious goals, and find new solutions to old and new problems, but we also risk delegating decisions to autonomous systems that should have remained under human supervision, leaving us unable to monitor performance and redress errors.
  4. And finally, AI can nudge us towards socially preferable outcomes, strengthen social cohesion, and bring us to collaborate on complex global problems, but the routines meant to make our lives easier could also erode our ability to determine for ourselves and lead to unintended and unwelcome changes in our behavior.

AI4People also points out that an ethical approach to AI has a dual advantage: if developed responsibly and with ethical considerations in mind, we can take advantage of the value AI makes possible while at the same time minimizing the risks – even those risks that are legally unquestionable.

What we need are approaches to AI where we remain in control, where our ability to determine for ourselves is preserved, where our abilities are improved and multiplied, and where we are provided with new opportunities (for instance, by working alongside AI) as some skills become obsolete. To achieve this dual advantage, high levels of trust are required so that opportunities are not forgone out of fear of mistakes, and it must be possible to trace responsibility in case of errors so that risks can be managed.

As we have seen, transparent and explainable AI delivers the right mix of control, trustworthiness, and traceability to meet the marks of ethical AI. It is therefore also well-positioned to realize the potential that an ethical approach to AI holds.
