Casper Wilstrup.

A human-centric AI inventor and an expert on symbolic AI.

Casper Wilstrup is the founder and CEO of Abzu. Casper is the inventor of the QLattice® artificial intelligence (AI) algorithm and has more than 20 years of experience building large-scale systems for data processing, analysis, and AI.

He is passionate about the impact of AI on science and research and the intersection of AI with philosophy and ethics.

Casper Wilstrup speaking at the Nordic Showcase at SLUSH 2021 on accelerating scientific discoveries with symbolic AI.

A distinguished public speaker in both English and Danish, Casper has appeared on stage at events (e.g., SLUSH, TechBBQ, Nordic Fintech Week, and GOTO), been interviewed on podcasts (e.g., The Healthtech Podcast, Techtopia, and Deep Tech Stories), been featured in media (e.g., mandagmorgen, FINANS, and tech.eu), and published in research journals (e.g., the American Journal of Obstetrics and Gynecology and BMC Medical Informatics and Decision Making).

In addition to sharing his thoughts on symbolic AI and the ethics of AI use, Casper is interested in discussing Abzu’s unique teal organizational structure. There are no bosses at Abzu, and each Abzoid sets their own salary and schedule.

Casper is also one of the few people in the world fluent in reading ancient Sumerian.

A letter by Casper Wilstrup:

"AI Without the Mystery: The Key to Safer Technology."

In-demand topics for interviews, speaking engagements, and media placement:

Casper approaches the most pressing concerns and challenges in AI innovation and adoption transparently, with a knowledgeable and friendly flair.

Symbolic AI

AI regulation

AI ethics + safety

AI technology

AI Without the Mystery: The Key to Safer Technology.

A letter by Abzu CEO Casper Wilstrup.

We’re living in an era buzzing with artificial intelligence (AI) chat. There’s one big worry, though. A lot of AI is like a magician’s trick – we see the result, but we don’t know how it’s done. It leaves us guessing.

We call these ‘black-box’ AIs. They give us great predictions, but they keep the ‘how’ a secret. This mystery makes it hard for scientists to use AI to come up with new ideas.

There is a bright spot. A field called Explainable AI (XAI) is trying to crack open these black boxes. Popular methods like SHAP and LIME are like detective tools. They snoop on the AI, helping us understand what’s going on inside.
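To make the detective-tool idea concrete, here’s a minimal sketch of how a post-hoc explainer like SHAP is typically pointed at a trained model. The model and dataset below are stock illustrative choices, not part of our method:

```python
# Minimal sketch: post-hoc explanation of a black-box model with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque model on a standard dataset.
data = load_diabetes(as_frame=True)
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The explainer probes the model from the outside, attributing each
# prediction to the input features.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:200])

# Summarize which features pushed predictions up or down.
shap.plots.beeswarm(shap_values)
```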

Still, these tools only go so far. They watch AI behavior but don’t really get to the heart of how it’s done. So, we’re thinking of a new strategy, called symbolic AI. It’s about making AIs we can understand from the get-go. Like a worked math problem, they show every step, not just the final answer.
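And here’s a toy of what an understandable model looks like: just an explicit formula you can read and check, term by term. This is a hand-written illustration in the style of symbolic regression, not actual QLattice output, and the coefficients are invented:

```python
import numpy as np

# A symbolic model IS its own explanation: an explicit, auditable formula.
# Toy example only; the formula and coefficients are invented.

def braking_distance_m(speed_kmh: np.ndarray) -> np.ndarray:
    """Stopping distance = reaction term + braking term."""
    reaction = 0.28 * speed_kmh        # linear: driver reaction time
    braking = 0.01 * speed_kmh ** 2    # quadratic: kinetic energy
    return reaction + braking

# Every claim the model makes can be checked against physics or data.
print(braking_distance_m(np.array([30.0, 50.0, 100.0])))
```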

AI safety is big news. Famed AI researcher Eliezer Yudkowsky worries that unchecked AIs could be bad news for humans. Picture an AI so focused on its goal that it doesn’t care if it harms people to reach it. Scary, right? Yudkowsky says we need to make sure our AI has safety locks and follows our values.

But how do we make AI safe? We think the answer is simple. Let’s use AI to build understandable models that humans can check for safety. We stay in the driver’s seat this way.

Here’s our four-step plan:

  1. We build smart AI “scientists” to come up with new ideas.
  2. These AI “scientists” create models we can understand.
  3. We make sure these models work as they should.
  4. Finally, we use these models to make decisions in our AI systems.


This way, we get the best of both worlds – AI’s power and human control.
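In code terms, the loop might look something like this self-contained toy. Every function and candidate ‘model’ here is hypothetical; the only point is where the human check sits:

```python
# Toy sketch of the four-step loop; all names and models are hypothetical.

CANDIDATES = [
    "risk = 2.1 * age + 0.4 * bmi",         # a readable symbolic formula
    "risk = age ** 3 / bmi - 17 * smoker",  # another candidate to inspect
]

def propose_models():
    """Steps 1-2: an AI 'scientist' emits models humans can read."""
    return CANDIDATES

def human_review(model: str) -> bool:
    """Step 3: a domain expert signs off only on models that make sense.
    Here the review is faked with a simple approval set."""
    approved = {"risk = 2.1 * age + 0.4 * bmi"}
    return model in approved

def deploy(model: str) -> None:
    """Step 4: only vetted, understandable models make decisions."""
    print(f"deploying: {model}")

for candidate in propose_models():
    if human_review(candidate):  # humans stay in the driver's seat
        deploy(candidate)
```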

Now, let’s imagine what this could look like in real life.

AI doctors could use models to diagnose diseases and recommend treatments. AI cars could drive using models that represent traffic and road conditions. AI factory workers could use models to build things more efficiently. AI builders could use models to ensure buildings are safe and long-lasting.

AI is developing fast, and yes, it can be scary. But, it’s not going to stop. So, we need to make sure it’s safe. Our four-step process could be just the ticket. By focusing on clear models, we keep control and make sure AI sticks to our values.

Working hand-in-hand with AI in this way gives us the best shot at a safe, tech-powered future.

More from Casper:

Addressing in-demand themes in AI innovation and adoption.

Explainable AI has huge potential to accelerate disease understanding and drug design by revealing the biological mechanisms that drive drug activity, stability, safety, and delivery.

In the life sciences sector, many AI applications fall under the high-risk category.

This session provides an in-depth look at several critical aspects of the AI Act from the perspective of developers.

Dr. Andree Bates and Casper Wilstrup discuss how Abzu’s QLattice explainable AI algorithm can be used to accelerate disease understanding, drug design, and insights in Pharma R&D.

Deep learning works like half a brain – without the other half, AI cannot meet growing expectations.

Listen to Casper Wilstrup, CEO of Abzu, who sees the next step as AI moving from being an aid to directly taking over human work.

A discussion on AI, science, and philosophy, in which Casper explains the symbolic AI behind Abzu’s proprietary QLattice®.

Symbolic AI works with less data and is compatible with even the highest demands on transparency and explainability.

Contact Casper.

Ask a question, or book Casper for an interview, speaking engagement, or media placement.
