
AI ethics and ethical AI use.

Explainability is key to ethical AI use.

What makes AI ethical?

Ethical AI is explainable AI.

Today, everything seems possible with AI. But if you’re working with complex challenges or decision-making processes — processes that require personal information, like someone’s genetic profile, or a high-impact operation, like aircraft traffic — you simply cannot trust a model that makes a prediction without explaining its reasoning.

Abzu is an internationally recognized leader in AI ethics.

Abzu’s AI is explainable, resilient, and safe, and makes rational, evidence-based decisions.

Founded in 2018, Abzu established itself as a global leader in explainable, rational, and safe AI in the pharmaceutical industry.

Today, our technology accelerates new insights and understanding in pharma R&D and transforms critical and complex business processes for enterprise operations.

ISO 27001 certified for trustworthy AI.

“Cool Vendor” for explainable AI.

Abzu is a member of the Ethical AI database.

Abzu is award-winning for trustworthy and rational AI.

Disclaimer.
Gartner® and Cool Vendors™ are registered trademarks and service marks, and the Gartner Cool Vendor badge is a trademark and service mark of Gartner, Inc. and/or its affiliates, and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Read the 2022 Gartner® Cool Vendors™ in AI Governance and Responsible AI — From Principles to Practice report.

Abzu’s ethical AI framework:

We believe AI must be explainable and transparent to be trusted.

A letter by founder and CEO Casper Wilstrup:

"AI Without the Mystery: The Key to Safer Technology."

About Abzu.

Transparency and trust are qualities integral to our technology and who we are.

Learn more about Abzu in just 3 minutes.

Learn about how we are developing new technologies to improve scientific discoveries, why our teal management structure fuels our innovation, when women shouldn’t be considered outliers in STEM, and where we go winter bathing every Friday morning!

AI Without the Mystery: The Key to Safer Technology.

A letter by Abzu founder and CEO Casper Wilstrup.

We’re living in an era buzzing with artificial intelligence (AI) chat. There’s one big worry though. A lot of AI is like a magician’s trick – we see the result, but we don’t know how it’s done. It leaves us guessing.

We call these ‘black-box’ AIs. They give us great predictions, but they keep the ‘how’ a secret. This mystery makes it hard for scientists to use AI to come up with new ideas.

There is a bright spot. A field called Explainable AI (XAI) is trying to crack open these black boxes. Popular methods like SHAP and LIME are like detective tools. They snoop on the AI, helping us understand what’s going on inside.
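To make the “detective tool” idea concrete, here is a minimal LIME-style sketch (a toy illustration of the technique, not the real `lime` library): we probe a black-box model around one input point and fit a simple linear surrogate, whose slope acts as the local explanation. The black-box function and sampling choices below are illustrative assumptions.

```python
import random

# A "black-box" model: we can only query it, not inspect it.
# (Illustrative stand-in; a real use case would wrap a trained model.)
def black_box(x):
    return 3.0 * x * x + 2.0 * x + 1.0

def local_linear_explanation(model, x0, radius=0.01, n_samples=200):
    """LIME-style idea: perturb the input near x0, query the model,
    and fit a line y = slope * x + intercept by least squares.
    The slope is the local 'importance' of x around x0."""
    random.seed(0)
    xs = [x0 + random.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    mean_x = sum(xs) / n_samples
    mean_y = sum(ys) / n_samples
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = local_linear_explanation(black_box, x0=1.0)
# Near x0 = 1, the true local slope is d/dx (3x^2 + 2x + 1) = 6x + 2 = 8,
# so the surrogate's slope should come out close to 8.
print(slope)
```

The catch, as the next paragraph says, is that this only describes behavior near one point; it never reveals the model’s actual mechanism.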

Still, these tools only go so far. They watch AI behavior but don’t really get to the heart of how it’s done. So, we’re thinking of a new strategy, called Symbolic AI. It’s about making AIs we can understand from the get-go. Like a math problem, they show the whole working process, not just the final answer.
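A tiny sketch of what “understandable from the get-go” means (a toy illustration of the symbolic-model idea, not Abzu’s actual technology): the model is a readable expression, so the prediction and the explanation are the same object. The expression and variable names are made up for the example.

```python
# Toy symbolic (white-box) model: an expression tree that can both
# print its own reasoning and compute a prediction.
class Expr:
    def __init__(self, op, left=None, right=None, value=None, name=None):
        self.op, self.left, self.right = op, left, right
        self.value, self.name = value, name

    def evaluate(self, inputs):
        if self.op == "const":
            return self.value
        if self.op == "var":
            return inputs[self.name]
        a = self.left.evaluate(inputs)
        b = self.right.evaluate(inputs)
        return a + b if self.op == "+" else a * b

    def __str__(self):
        if self.op == "const":
            return str(self.value)
        if self.op == "var":
            return self.name
        return f"({self.left} {self.op} {self.right})"

# A hypothetical fitted model: risk = 0.5 * dose + 2.
model = Expr("+",
             Expr("*", Expr("const", value=0.5), Expr("var", name="dose")),
             Expr("const", value=2))

print(model)                         # ((0.5 * dose) + 2)
print(model.evaluate({"dose": 10}))  # 7.0
```

Unlike the black box above, every step of the “working process” is visible before a single prediction is made.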

AI safety is big news. Famed AI researcher Eliezer Yudkowsky worries that unchecked AIs could be bad news for humans. Picture an AI so focused on its goal that it doesn’t care if it harms people to reach it. Scary, right? Yudkowsky says we need to make sure our AI has safety locks and follows our values.

But how do we make AI safe? We think the answer is simple. Let’s use AI to build understandable models that humans can check for safety. We stay in the driver’s seat this way.

Here’s our four-step plan:

  1. We build smart AI “scientists” to come up with new ideas.
  2. These AI “scientists” create models we can understand.
  3. We make sure these models work as they should.
  4. Finally, we use these models to make decisions in our AI systems.
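The four steps above could be sketched as a simple loop: generate candidate models a human can read, keep only the ones that pass a safety check, and let a verified model drive the decision. Every name, formula, and threshold below is an illustrative assumption, not a real Abzu workflow.

```python
# Illustrative sketch of the four-step plan.
def generate_candidates():
    # Steps 1-2: an AI "scientist" proposes understandable models,
    # each a (human-readable description, function) pair.
    return [
        ("risk = 2.0 * exposure", lambda exposure: 2.0 * exposure),
        ("risk = exposure - 5.0", lambda exposure: exposure - 5.0),
    ]

def verify(model):
    # Step 3: check the model works as it should. Example safety
    # property: risk must never be negative for valid inputs.
    return all(model(exposure) >= 0 for exposure in [0.0, 1.0, 10.0])

def decide(model, exposure, threshold=10.0):
    # Step 4: only a verified model is allowed to drive decisions.
    return "intervene" if model(exposure) > threshold else "monitor"

verified = [(desc, m) for desc, m in generate_candidates() if verify(m)]
description, model = verified[0]
print(description)         # risk = 2.0 * exposure
print(decide(model, 7.0))  # intervene
```

The second candidate is rejected because it can produce a negative risk, which is exactly the point: the human-checkable step stands between idea generation and deployment.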


This way, we get the best of both worlds – AI’s power and human control.

Now, let’s imagine what this could look like in real life.

AI doctors could use models to diagnose diseases and recommend treatments. AI cars could drive using models that represent traffic and road conditions. AI factory workers could use models to build things more efficiently. AI builders could use models to ensure buildings are safe and long-lasting.

AI is developing fast, and yes, it can be scary. But, it’s not going to stop. So, we need to make sure it’s safe. Our four-step process could be just the ticket. By focusing on clear models, we keep control and make sure AI sticks to our values.

Working hand-in-hand with AI in this way gives us the best shot at a safe, tech-powered future.

Abzu's perspective on AI ethics.

Navigating AI ethics and ethical AI application.

Casper Wilstrup explains the symbolic AI behind Abzu's proprietary QLattice®: An AI that generates explanations along with predictions.

Casper joins the EU-Startups podcast not only to talk about the benefits, but also the real challenges of startups building their tech stacks on black-box AI infrastructure, and the uncertainty over upcoming regulations from the EU and other regulatory bodies.

“According to my naturalist view, all stuff — including AIs — has some level of proto-consciousness. However, I’m fairly certain that current AIs aren’t experiencing unified consciousness like we do.”

Subscribe for notifications from Abzu.

You can opt out at any time. We’re cookieless, and our privacy policy is actually easy to read.