Understanding the data we create

We have to understand what the decisions we make are based on, not blindly trust that the computer is right.

This transcript has minor edits to improve readability. The complete presentation is available to watch on YouTube.

If we want to continue being the humans in control of the technology we built, of the society we create, then we have to understand what the decisions we make are based on and not just blindly trust that the computer's probably right.

I'm Casper Wilstrup. I'm one of the founders of the company, Abzu, where I serve as CEO.

Fundamentally, when you use artificial intelligence, or more specifically machine learning, to analyze a collection of data, you typically get a model that can predict an outcome. But it will not be able to tell you anything about why these things occurred.

So it will perhaps say: This person will not benefit from a drug, whereas this person will. But you, as a researcher, won't get any kind of insight into why that is.

So what Abzu set out to do was to create a system that, instead of giving you these kinds of so-called black-box models, would actually give you the scientific theories that could explain the phenomenon you were studying.
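The contrast described above can be sketched in a few lines of code. This is a hedged illustration only, not Abzu's actual technology or API: the black-box predictor stands in for an opaque trained model, and the symbolic theory stands in for the kind of readable formula an explainable approach aims to produce. All function names, weights, and the biomarker rule are invented for illustration.

```python
# Hypothetical contrast: a black-box predictor vs. an explicit, readable theory.
# Nothing here is Abzu's real system; the names and numbers are made up.

def black_box_predict(patient):
    # Stands in for a trained model: it returns an answer,
    # but inspecting it reveals nothing about *why*.
    weights = [0.37, -1.42, 0.05, 2.91]  # opaque learned parameters
    score = sum(w * x for w, x in zip(weights, patient))
    return score > 0  # True = "will benefit from the drug"

def symbolic_theory(patient):
    # The kind of output an explainable approach aims for: a compact
    # formula a researcher can read, test, and reason about.
    biomarker_a, biomarker_b, age, dose = patient
    return biomarker_a > 2 * biomarker_b  # hypothesis: benefit when A > 2*B

patient = (3.0, 1.0, 54, 0.5)
print(black_box_predict(patient))  # a prediction, but no explanation
print(symbolic_theory(patient))    # a prediction *and* a testable hypothesis
```

Both functions classify the same patient, but only the second one hands the researcher a hypothesis that can be challenged, refined, or falsified.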

In the long run, I think explainable artificial intelligence of the flavor we're developing is going to be, if not the leading approach, then at least half of the market across virtually any specific business problem you can mention.

And that's where we see Abzu's big long-term untapped potential: providing this fundamental technology for use across many, many different business verticals.

Thanks to Digital Hub Denmark and Industriens Fond for your support in sharing our vision to bring interpretability and explainability to life science data sets.
