Ethical AI use

AI ethics: The explainability imperative.

At Abzu, we're pioneering a future where the decisions made by AI are as clear as they are powerful.

The role of explainability in ethical AI use.

The essence of AI ethics lies in its ability to be understood by those it serves.

Today, everything seems possible with AI. But ethical AI use requires more than just smart algorithms; it demands explainability in how decisions are made.

Ethics of AI

Abzu is internationally recognized for explainable AI.

We have an unwavering commitment to ethical standards in AI and data use.

Only explainable AI provides “the why” behind predictions. No prying, no forcing. Simply: Decisions you can understand.

ISO 27001 certified for trustworthy AI.

“Cool Vendor” for explainable AI.

Technology created and patented by Abzu.

Abzu is a member of the Ethical AI database.

Disclaimer.
Gartner® and Cool Vendors™ are registered trademarks and service marks, and the Gartner Cool Vendor badge is a trademark and service mark of Gartner, Inc. and/or its affiliates, and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Read the 2022 Gartner® Cool Vendors™ in AI Governance and Responsible AI — From Principles to Practice report.

Abzu’s AI ethics framework:

We believe AI must be explainable to be trusted.

At Abzu, we’ve embodied this principle by making explainability the hallmark of our technology. It’s simple: Ethical AI is explainable AI.

AI without the mystery: The key to safer technology.

A letter by Abzu founder and CEO Casper Wilstrup.

We’re living in an era buzzing with artificial intelligence (AI) chatter. There’s one big worry, though. A lot of AI is like a magician’s trick — we see the result, but we don’t know how it’s done. It leaves us guessing.

We call these “black-box” AIs. They give us great predictions, but they keep the “how” a secret. This mystery makes it hard for scientists to use AI to come up with new ideas.

There is a bright spot. A field called “explainable AI” is trying to crack open these black boxes. Popular methods like SHAP and LIME are like detective tools. They snoop on the AI, helping us understand what’s going on inside.
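To make that concrete, here is a minimal sketch of how a post-hoc tool like SHAP is typically used. The model and dataset are illustrative choices on our part, not a description of Abzu's stack:

```python
# A minimal sketch of post-hoc explanation with SHAP. The model and
# dataset here are illustrative, not Abzu's stack.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model: hundreds of trees, no single readable formula.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP "snoops" on the trained model: it attributes each prediction to
# feature contributions by observing the model's behavior from outside.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Per-feature contributions for the first five predictions -- useful,
# but a description of behavior, not the model's inner logic.
print(shap_values)
```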

Still, these tools only go so far. They watch AI behavior, but they don’t really get to the heart of how it’s done. So, at Abzu, we’ve thought of a new strategy: Scientific AI. It’s about making AIs we can understand from the get-go. Like a math problem, they show the whole working process, not just the final answer.
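As a rough illustration of the difference, here is a sketch using Abzu's feyn package (the QLattice), based on its public Python API; the dataset and column names are placeholders:

```python
# A rough sketch of a model that is interpretable by construction,
# using Abzu's feyn package (the QLattice). The dataset and column
# names are placeholders; consult the feyn docs for the current API.
import feyn
import pandas as pd

train = pd.read_csv("clinical_measurements.csv")  # hypothetical dataset

ql = feyn.QLattice(random_seed=42)
models = ql.auto_run(data=train, output_name="disease_progression")

# The best model is a closed-form mathematical expression: the whole
# working process is visible, not just the final answer.
best = models[0]
print(best.sympify(signif=3))
```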

AI safety is big news. Famed AI researcher Eliezer Yudkowsky worries that unchecked AIs could be bad news for humans. Picture an AI so focused on its goal that it doesn’t care if it harms people to reach it. Scary, right? Yudkowsky says we need to make sure our AI has safety locks and follows our values.

But how do we make AI safe? We think the answer is simple. Let’s use AI to build understandable models that humans can check for safety. We stay in the driver’s seat this way.

Here’s our four-step plan:

  1. We build smart AI “scientists” to come up with new ideas.
  2. These AI “scientists” create models we can understand.
  3. We make sure these models work as they should.
  4. Finally, we use these models to make decisions in our AI systems.


This way, we get the best of both worlds – AI’s power and human control.
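To ground steps 2 through 4 of the plan, here is a toy, library-free sketch: the candidate model is a readable expression, a human-checkable validation gate sits in the middle, and only a validated model gets to make decisions. Every name and number here is illustrative:

```python
# A toy sketch of steps 2-4 of the plan above. All names, formulas,
# and numbers are illustrative, not a real Abzu model.

# Step 2: an AI "scientist" proposes a model we can read and reason about.
candidate_model = {
    "expression": "risk = 0.8 * age_factor + 1.2 * exposure",
    "predict": lambda age_factor, exposure: 0.8 * age_factor + 1.2 * exposure,
}

# Step 3: a human-checkable validation gate. Because the expression is
# explicit, reviewers can test it against held-out cases and domain limits.
def validate(model, test_cases, tolerance=0.1):
    return all(
        abs(model["predict"](**inputs) - expected) <= tolerance
        for inputs, expected in test_cases
    )

test_cases = [
    ({"age_factor": 1.0, "exposure": 0.5}, 1.4),
    ({"age_factor": 0.0, "exposure": 1.0}, 1.2),
]

# Step 4: only a validated, understood model makes decisions.
if validate(candidate_model, test_cases):
    decision = candidate_model["predict"](age_factor=0.7, exposure=0.3)
    print(f"Approved model output: {decision:.2f}")
```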

Now, let’s imagine what this could look like in real life:

AI doctors could use models to diagnose diseases and recommend treatments. AI cars could drive using models that represent traffic and road conditions. AI factory workers could use models to build things more efficiently. AI builders could use models to ensure buildings are safe and long-lasting.

AI is developing fast, and yes, it can be scary. But it’s not going to stop. So, we need to make sure it’s safe. Our four-step process could be just the ticket. By focusing on clear models, we keep control and make sure AI sticks to our values.

Working hand-in-hand with AI in this way gives us the best shot at a safe, tech-powered future.

Ethical AI: Our vision for the future.

Discover how Abzu is setting the standard for ethical AI with a commitment to explainability.

In the life sciences sector, many AI applications fall under the high-risk category.

Dr. Andree Bates and Casper Wilstrup discuss how Abzu’s QLattice explainable AI algorithm can be used to accelerate disease understanding, drug design, and insights in Pharma R&D.
