What is explainable AI?

The answer is Abzu.

What is explainable AI? Is AI dangerous? Should there be regulation?

Abzu has answers.

“AI Without the Mystery: The Key to Safer Technology”

A letter by Abzu CEO Casper Wilstrup.

Abzu answers questions.

Abzu was founded in 2018 to build artificial intelligence that finds transparent and understandable answers to the world’s unanswered questions in science, business, and everyday life.

AI Q&A.

What is AI?

A quick history, including easy-to-understand definitions.

What is black-box AI?

Why the “how” and “why” behind a prediction is important.

Is AI dangerous?

How do we make AI safe? Is AI an existential threat?

Should AI be regulated?

And who should regulate it?

What is explainable AI?

Information about Abzu’s AI and applications.

Got a question?

Tell us what you want to know.

AI Without the Mystery: The Key to Safer Technology

A letter by Abzu CEO Casper Wilstrup.

We’re living in an era buzzing with artificial intelligence (AI) chat. There’s one big worry though. A lot of AI is like a magician’s trick – we see the result, but we don’t know how it’s done. It leaves us guessing.

We call these ‘black-box’ AIs. They give us great predictions, but they keep the ‘how’ a secret. This mystery makes it hard for scientists to use AI to come up with new ideas.

There is a bright spot. A field called Explainable AI (XAI) is trying to crack open these black boxes. Popular methods like SHAP and LIME are like detective tools. They snoop on the AI, helping us understand what’s going on inside.
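To make that concrete, here is a minimal, hypothetical sketch of how a tool like SHAP is typically used in Python. The model and data are invented stand-ins; the point is that the explainer probes an already-trained model from the outside.

```python
# Minimal, hypothetical sketch of post-hoc explanation with SHAP.
# The dataset and model are made up; the pattern is "train first, explain afterwards".
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                            # three invented input features
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(random_state=0).fit(X, y)  # the "black box" we want to snoop on

explainer = shap.Explainer(model, X)                     # the detective tool
shap_values = explainer(X)                               # per-feature contributions per prediction

# Each row is an additive breakdown of one prediction. It estimates how the
# model behaves, but it still doesn't reveal the model's inner logic.
print(shap_values.values[0])
```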

Still, these tools only go so far. They watch AI behavior but don’t really get to the heart of how it’s done. So, we’re thinking of a new strategy, called Symbolic AI. It’s about making AIs we can understand from the get-go. Like a math problem, they show the whole working process, not just the final answer.
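To show the difference, here is a hypothetical example of what a symbolic model can look like: the entire model is a short formula a person can read, question, and check. The variables and coefficients below are invented.

```python
# Hypothetical example: a symbolic model is an explicit, readable formula.
# Instead of thousands of opaque weights, the whole "model" fits on one line.
import sympy as sp

dose, age = sp.symbols("dose age")               # invented input variables
risk = 0.8 * sp.exp(-0.05 * dose) + 0.01 * age   # invented formula a symbolic AI might propose

print(risk)                                       # something like 0.01*age + 0.8*exp(-0.05*dose)

# A domain expert can challenge either term before the model is ever used:
# risk falls off exponentially with dose and grows linearly with age.
print(float(risk.subs({dose: 10.0, age: 50})))    # evaluate for one hypothetical case
```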

AI safety is big news. Famed AI researcher Eliezer Yudkowsky worries that unchecked AIs could be bad news for humans. Picture an AI so focused on its goal that it doesn’t care if it harms people to reach it. Scary, right? Yudkowsky says we need to make sure our AI has safety locks and follows our values.

But how do we make AI safe? We think the answer is simple. Let’s use AI to build understandable models that humans can check for safety. We stay in the driver’s seat this way.

Here’s our four-step plan:

  1. We build smart AI “scientists” to come up with new ideas.
  2. These AI “scientists” create models we can understand.
  3. We make sure these models work as they should.
  4. Finally, we use these models to make decisions in our AI systems.


This way, we get the best of both worlds – AI’s power and human control.
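As a deliberately simplified sketch of that loop, imagine code along these lines. Every name below is hypothetical; the point is only that a human review step sits between the model-building AI and the system that acts on the model.

```python
# Hypothetical sketch of the four-step loop: propose, understand, verify, deploy.
# None of these functions refer to a real product; they only show the control flow.

def propose_models(data):
    """Steps 1-2: an AI 'scientist' proposes simple, human-readable candidate models."""
    return ["risk = 2.1*biomarker - 0.3*age"]      # stand-in for proposed equations

def human_review(model):
    """Step 3: people check that the model is safe and makes scientific sense."""
    return True                                    # stand-in for a real review process

def deploy(model):
    """Step 4: only reviewed models are used to make decisions."""
    print(f"Decisions are now based on: {model}")

for candidate in propose_models(data=None):        # data=None stands in for a real dataset
    if human_review(candidate):
        deploy(candidate)
```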

Now, let’s imagine what this could look like in real life.

AI doctors could use models to diagnose diseases and recommend treatments. AI cars could drive using models that represent traffic and road conditions. AI factory workers could use models to build things more efficiently. AI builders could use models to ensure buildings are safe and long-lasting.

AI is developing fast, and yes, it can be scary. But, it’s not going to stop. So, we need to make sure it’s safe. Our four-step process could be just the ticket. By focusing on clear models, we keep control and make sure AI sticks to our values.

Working hand-in-hand with AI in this way gives us the best shot at a safe, tech-powered future.

What is AI?

Despite its recent boom, AI is a relatively young field – and it has experienced its fair share of ups and downs.

Although it wasn’t called “artificial intelligence” at the time, the first mathematical computer model of a single neuron was developed in the early 1940s. This invention was promising, but a lack of computational power prevented even minor real-world application, which led to what is known as the first AI winter.

There was not much progress in the field of AI until thirty years later, in the late 1970s. Neural networks – collections of interconnected computer models of neurons – became possible by virtue of the first microprocessors. These were called “discovery systems” or “expert systems,” because they were trained to be experts in a particular field to usher in new discoveries. The original applications for discovery systems were, in fact, in life science: Their primary aims were to study hypothesis formation and discovery in molecular chemistry and disease diagnosis (1). But the limitations of the time – the expenses of data storage, hardware, and implementation – outweighed any benefits, which led to the second AI winter.

Today, with the advent of Big Data and more powerful processors, we are experiencing an AI boom. AI models are achieving impressive feats in areas such as gaming (mastering chess, Go, and then Jeopardy), science (predicting the 3D structures of proteins), and automating human tasks (delivering packages, driving cars, and having conversations).

For all its short history of fewer than 100 years, AI has both disappointed and surpassed our expectations.

“Artificial intelligence” is a very broad term, which involves using computer systems to make predictions or automate tasks. When people talk about AI, they’re typically referring to what is called Artificial Narrow Intelligence (ANI).

The field of AI is rapidly evolving, and there are still two other categories of AI that don’t exist yet in reality: Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). These categories are characterized by their closeness to or ability to surpass human intelligence and performance.

ANI has several subfields that are often referred to interchangeably, but in reality they mean different things. “Machine learning” involves training an algorithm on data to make a prediction, a “neural network” is a set of algorithms meant to mimic the neurons in a human brain, and “deep learning” is a subfield of machine learning that uses neural networks with more than three layers.

Generative AI, the form of artificial intelligence popularized by ChatGPT and DALL-E, although powerful, is still considered ANI. Generative AI is a deep learning algorithm that has been trained on massive amounts of data to accurately predict the next word or pixel.
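For a feel of what “predict the next word” means mechanically, here is a toy, invented sketch. A real system does the same kind of step over a vocabulary of tens of thousands of tokens, with probabilities it learned from data.

```python
# Toy, invented sketch of next-word prediction: repeatedly pick the most
# likely next word given the text so far.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]    # made-up vocabulary

def next_word_probabilities(context):
    """Stand-in for a trained model: one probability per word in the vocabulary."""
    rng = np.random.default_rng(len(context))      # deterministic toy "model"
    logits = rng.normal(size=len(vocab))
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()                       # softmax over the vocabulary

text = ["the", "cat"]
for _ in range(4):
    probs = next_word_probabilities(text)
    text.append(vocab[int(np.argmax(probs))])      # greedy choice of the likeliest word

print(" ".join(text))
```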

Machine learning, deep learning, and neural networks are less like a set of Matryoshka dolls and more like a Venn diagram of algorithms. For example, deep learning is a kind of machine learning, but machine learning does not have to be achieved through neural networks. But memorizing these categories is not the crucial point. The most important thing when understanding and evaluating any AI model is how it arrives at a prediction.
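A small, hypothetical example of how the pieces relate: both models below are machine learning, but only the second is deep learning, because only it is a neural network with several hidden layers. The data and settings are arbitrary.

```python
# Hypothetical example: two machine-learning models, only one of which is deep learning.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Machine learning that is NOT a neural network: a decision tree.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Machine learning that IS deep learning: a neural network with several hidden layers.
deep_net = MLPClassifier(hidden_layer_sizes=(32, 32, 32, 32), max_iter=2000).fit(X, y)

print(tree.score(X, y), deep_net.score(X, y))   # both make predictions; the categories overlap
```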

What is black-box AI?

Black-box AI describes a system where the internal workings — the “how” and “why” — aren’t transparent to the user.

You cannot examine the system’s code, logic, biases, or weights to understand how it produced a prediction. The only thing you know is “data in, prediction out.”

Using black-box machine learning models – models that lack transparency and explainability, e.g. deep learning or complex decision trees – is problematic because predictions are not readily understood.

Is all AI black-box AI? Not necessarily.

But the application of neural networks, often referred to as “black-box AI”, has a marred past of computational expense, bias, and an inability to validate. Most importantly, it’s the lack of explainability — especially concerning high-impact decisions — that makes black-box AI so perilous.

With black-box AI, there is no algorithmic accountability.

A prediction made by black-box AI is impossible to falsify because there is no insight into how it was made or what it was based on. The only thing you know is “data in, prediction out.”
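A small, hypothetical illustration of what “data in, prediction out” looks like in practice: the trained network answers, but its internals are just matrices of numbers that don’t explain the answer.

```python
# Hypothetical illustration of "data in, prediction out" with a neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                       # invented inputs
y = X[:, 0] ** 2 - X[:, 3]                          # a relationship the user never gets to see

black_box = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000).fit(X, y)

print(black_box.predict(X[:1]))                     # data in, prediction out
print(black_box.coefs_[0].shape)                    # the "insight" on offer: a 4x64 weight matrix
```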

Is AI dangerous?

AI can be dangerous if we can’t check how it reaches its decisions, so the key is keeping humans in control. At Abzu, we have a four-step plan:

1. We build smart AI “scientists” to come up with new ideas.
2. These AI “scientists” create models we can understand.
3. We make sure these models work as they should.
4. Finally, we use these models to make decisions in our AI systems.

This way, we get the best of both worlds: AI’s power and human control.

Should AI be regulated?

We’re working on a detailed proposal — stay tuned!

What is explainable AI?

Explainable AI, or white-box AI, describes a system where the internal workings — the “how” and “why” — are transparent to the user.

Abzu’s QLattice is explainable AI. Its predictions are simple mathematical equations that are readily interpretable and explainable.
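For readers who want to see what that means in practice, here is a rough sketch based on Abzu’s open-source feyn Python package. The dataset is invented, and the exact call names and arguments are assumptions that may differ between package versions.

```python
# Rough sketch of fitting an explainable model with Abzu's feyn package.
# The dataset is invented; treat the exact call signatures as assumptions.
import pandas as pd
import feyn

train = pd.read_csv("my_measurements.csv")              # hypothetical data with a 'target' column

ql = feyn.QLattice(random_seed=1)                        # assumed constructor
models = ql.auto_run(data=train, output_name="target")   # assumed model search call

best = models[0]
print(best.sympify())                                    # the prediction is a readable equation
```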

Because scientific research demands more than black-box AI, Abzu is building Abzu AI: An AI-guided exploration and prediction platform designed for scientific research.

Learn more about Abzu AI.

Ask an AI expert.

Abzu has answers.

Submit a question for us to address or add to our AI Q&A.

Need to interview or speak with an AI expert? Get in touch here.

AI expert Casper Wilstrup

CEO of Abzu

Casper is the founder and CEO of Abzu, the Danish/Spanish startup building AI and applications to realize undiscovered truths.

Casper is the inventor of the QLattice® symbolic AI algorithm. Casper has 20+ years of experience building large-scale systems for data processing, analysis, and AI, and is passionate about the impact of AI on science and research and the intersection of AI with philosophy and ethics.

More thoughts from Casper.

Abzu podcast - Casper Wilstrup on Deep Tech Stories

Part 2 with Casper Wilstrup: Self-management, transparency, and a new kind of AI to revolutionize science.

Learn more.
Abzu podcast - Casper Wilstrup on Deep Tech Stories

Part 1 with Casper Wilstrup: The path to inventing a new kind of AI.

Learn more.
Abzu podcast - Forward by Dawn 2

An in-depth intro to Abzu and our technology available today.

Learn more.
Abzu podcast - Techtopia

Artificial intelligence needs to come out of the black box, so we can understand how it arrived at its result and what we can use it for.

Learn more.
Abzu podcast - En verden af information

In conversation with physicist and AI company founder Casper Skern Wilstrup, we look at life’s REALLY big questions.

Learn more.
Casper speaking on Understanding the data we create

In under 2 minutes: Why we have to understand what our decisions are based on instead of blindly trusting that a computer is right.

Learn more.
Abzu podcast - The Healthtech Podcast

Casper talks about how Abzu was founded, and everything healthcare and technology.

Learn more.
Casper Wilstrup speaking at TechBBQ 2021

A 17-minute video about Abzu’s origins and an impactful application of explainable AI in life science: predicting breast cancer mortality.

Learn more.

Who is Abzu?

Abzu is a scientific AI lab.

We build artificial intelligence that finds transparent and understandable answers to the world’s unanswered questions in science, business, and everyday life.

Awards and recognition.

We are thrilled and honored to be recognized for our innovative and ethical technology and unique organization.

2023 winner “Best HealthTech” Nordics

Nordic Ethical AI Landscape member

EIC Accelerator grant winner

2022 Gartner® Cool Vendor™

2022 1st place winner “Synthetic Track”

2022 2nd place winner “Real-world Track”

Abzu AI is award-winning

Subscribe for notifications about Abzu AI.

You can opt out at any time. We’re cookieless, and our privacy policy is actually easy to read.