Casper Wilstrup.

Casper is the founder and CEO of Abzu, inventor of the QLattice® AI algorithm, and an expert on symbolic AI.

A human-centric AI inventor and an expert on symbolic AI.

Casper Wilstrup is the founder and CEO of Abzu. Casper is the inventor of the QLattice® artificial intelligence (AI) algorithm and has more than 20 years of experience building large-scale systems for data processing, analysis, and AI.

He is passionate about the impact of AI on science and research and the intersection of AI with philosophy and ethics.

Casper Wilstrup speaking at SLUSH 2021

Casper Wilstrup speaking at the Nordic Showcase at SLUSH 2021 on accelerating scientific discoveries with symbolic AI.

A distinguished public speaker in both English and Danish, Casper has been on-stage at events (e.g., SLUSH, TechBBQ, Nordic Fintech Week, and GOTO), interviewed in podcasts (e.g., The Healthtech Podcast, Techtopia, and Deep Tech Stories), featured in media (e.g., mandagmorgen and FINANS), and published in research journals (e.g., The American Journal of Obstetrics and Gynecology and BMC Medical Informatics and Decision Making).

In addition to sharing his thoughts on symbolic AI and the ethics of AI use, Casper is also interested in discussing Abzu’s unique teal organizational structure. There are no bosses at Abzu, and each Abzoid sets their own salary and schedule.

Casper is also one of the only people in the world who is fluent in reading ancient Sumerian.

A letter by Casper Wilstrup:

"AI Without the Mystery: The Key to Safer Technology."

Abzu Illustration - Hand reaching up

In-demand topics for interviews, speaking engagements, and media placement:

Casper approaches the most pressing concerns and challenges in AI innovation and adoption — transparently — with a knowledgeable and friendly flair.

Symbolic AI

AI regulation

AI ethics + safety

AI technology

AI Without the Mystery: The Key to Safer Technology.

A letter by Abzu CEO Casper Wilstrup.

We’re living in an era buzzing with talk of artificial intelligence (AI). There’s one big worry, though. A lot of AI is like a magician’s trick – we see the result, but we don’t know how it’s done. It leaves us guessing.

We call these ‘black-box’ AIs. They give us great predictions, but they keep the ‘how’ a secret. This mystery makes it hard for scientists to use AI to come up with new ideas.

There is a bright spot. A field called Explainable AI (XAI) is trying to crack open these black boxes. Popular methods like SHAP and LIME are like detective tools. They snoop on the AI, helping us understand what’s going on inside.
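To make the "detective tool" idea concrete, here is a toy illustration (not the real SHAP or LIME libraries, and all names are invented) of the common trick behind perturbation-based XAI: nudge each input to a black-box model and watch how the prediction moves.

```python
# Toy illustration of perturbation-based XAI probing.
# black_box stands in for an opaque model we cannot read directly.

def black_box(features):
    x, y = features
    return 3 * x - 2 * y + 0.5

def local_sensitivity(model, features, eps=1e-4):
    """Estimate how strongly each feature drives the prediction near one point."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        nudged = list(features)
        nudged[i] += eps  # nudge one feature, hold the rest fixed
        scores.append((model(nudged) - base) / eps)
    return scores

print(local_sensitivity(black_box, [1.0, 2.0]))  # roughly [3.0, -2.0]
```

The probe recovers the hidden coefficients here only because the toy model is linear; for real models, such tools give local approximations, not the model's true inner workings – which is exactly the limitation the next paragraph describes.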

Still, these tools only go so far. They watch AI behavior but don’t really get to the heart of how it’s done. So, we’re thinking of a new strategy, called Symbolic AI. It’s about making AIs we can understand from the get-go. Like a math problem, they show the whole working process, not just the final answer.
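A minimal sketch of the symbolic-AI idea: instead of opaque weights, the model is an explicit formula a human can read and check. The candidate formulas and data below are made up for illustration; real symbolic-AI systems search vastly larger spaces of expressions.

```python
# The "model" here is just a readable string paired with the function it denotes.
candidates = {
    "x + 1":    lambda x: x + 1,
    "x**2 + 1": lambda x: x**2 + 1,
    "2*x":      lambda x: 2 * x,
}

data = [(0, 1), (1, 2), (2, 5), (3, 10)]  # generated from y = x**2 + 1

def squared_error(f):
    return sum((f(x) - y) ** 2 for x, y in data)

# Pick the formula that best fits the data.
best = min(candidates, key=lambda name: squared_error(candidates[name]))
print(best)  # the entire model is this readable expression: x**2 + 1
```

The point is the output: the fitted model shows its whole working process, not just its predictions.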

AI safety is big news. Famed AI researcher Eliezer Yudkowsky worries that unchecked AIs could be bad news for humans. Picture an AI so focused on its goal that it doesn’t care if it harms people to reach it. Scary, right? Yudkowsky says we need to make sure our AI has safety locks and follows our values.

But how do we make AI safe? We think the answer is simple. Let’s use AI to build understandable models that humans can check for safety. We stay in the driver’s seat this way.

Here’s our four-step plan:

  1. We build smart AI “scientists” to come up with new ideas.
  2. These AI “scientists” create models we can understand.
  3. We make sure these models work as they should.
  4. Finally, we use these models to make decisions in our AI systems.
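The four steps above can be sketched in code. Everything here is a hypothetical stand-in (the helper names, the risk formula, and the safety check are invented for illustration), but it shows the shape of the loop: propose a readable model, verify it, and only then let it drive decisions.

```python
# Step 1-2: an AI "scientist" proposes a transparent, human-readable model.
def propose_model():
    return {
        "formula": "risk = 2 * speed - visibility",
        "predict": lambda speed, visibility: 2 * speed - visibility,
    }

# Step 3: check a human-specified safety property before trusting the model.
def verify(model):
    # Risk must rise with speed, all else equal.
    return model["predict"](60, 10) > model["predict"](30, 10)

# Step 4: only a verified model is wired into the decision system.
def decide(model, speed, visibility, threshold=50):
    return "slow down" if model["predict"](speed, visibility) > threshold else "proceed"

model = propose_model()
assert verify(model), "reject any model that fails the human-specified checks"
print(decide(model, speed=40, visibility=20))  # 2*40 - 20 = 60 > 50 -> "slow down"
```

Because the model is an explicit formula, the verification step is something a human can audit line by line – that is the "human control" half of the bargain.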

This way, we get the best of both worlds – AI’s power and human control.

Now, let’s imagine what this could look like in real life.

AI doctors could use models to diagnose diseases and recommend treatments. AI cars could drive using models that represent traffic and road conditions. AI factory workers could use models to build things more efficiently. AI builders could use models to ensure buildings are safe and long-lasting.

AI is developing fast, and yes, it can be scary. But, it’s not going to stop. So, we need to make sure it’s safe. Our four-step process could be just the ticket. By focusing on clear models, we keep control and make sure AI sticks to our values.

Working hand-in-hand with AI in this way gives us the best shot at a safe, tech-powered future.

More from Casper:

Addressing in-demand themes in AI innovation and adoption.

Abzu podcast - Casper Wilstrup on Deep Tech Stories

Part 2 with Casper Wilstrup: Self-management, transparency, and a new kind of AI to revolutionize science.

Learn more
Abzu podcast - Casper Wilstrup on Deep Tech Stories

Part 1 with Casper Wilstrup: The path to inventing a new kind of AI.

Learn more
Abzu podcast - Forward by Dawn 2

An in-depth intro to Abzu and our technology available today.

Learn more
Abzu podcast - Techtopia

Artificial intelligence has to come out of the black box, so we can understand how it arrived at its result and what we can use it for.

Learn more
Abzu podcast - En verden af information

In conversation with physicist and AI company founder Casper Skern Wilstrup, we look at life's REALLY big questions.

Learn more
Casper speaking on Understanding the data we create

In under 2 minutes: Why we have to understand what our decisions are based on, rather than blindly trusting that a computer is right.

Learn more
Abzu podcast - The Healthtech Podcast

Casper talks about how Abzu was founded, and everything healthcare and technology.

Learn more
Casper Wilstrup speaking at TechBBQ 2021

A 17-minute video about Abzu’s origins and an impactful application of explainable AI in life science: diagnosing breast cancer mortality.

Learn more

Contact Casper.

Ask a question, or book Casper for an interview, speaking engagement, or media placement.