Welcome to The Healthtech Podcast where we talk about everything healthcare and technology. And I'm your host, James Somauroo.
Hey, everybody, so this week I'm joined by Casper Wilstrup. Casper founded the Danish artificial intelligence startup Abzu, along with seven brilliant, friendly, and quirky humans, so I’m told. Their goal was to challenge the fundamental assumptions of contemporary black-box AI.
And today Abzu is 17-employees strong, and its transparent and explainable AI is solving problems in big pharma, exposing previously hidden relationships in drug discovery, clinical trials, and biopharmaceutics.
Casper, welcome to The Healthtech Podcast. How you doing?
I'm doing good. Thank you very much, James.
I immediately feel out of my depth in the conversation that we're about to have about AI and technology, but I'm looking forward, as I said previously, to taking my listeners on the learning journey with me.
Whereabouts are you speaking to us from today, Casper?
I'm located in the northern part of Copenhagen, at our office here. Right by the sea, actually. Looking out on a sunny ocean.
Are many of your 17 employees with you in the office today?
Well, about half, I think. We also have five of our colleagues actually located in our office in Barcelona. So they are there. But of the Copenhagen staff, I'd say about half tends to work in the office on an average day, maybe a little bit more than half, with the number going back up after what we've all been through.
Welcome change. So, Casper, the way we start these episodes is we get you to tell your story. And I imagine it's a fascinating one, given what you're doing now. So I'm looking forward to hearing it. So, by all means, tell us your story.
All right, well, I guess in terms of what I'm doing now, my story probably starts when I was about nine, which was the first time I got my hands on a computer. It was a ZX81, one of the first early consumer computers, launched, I think, in '81 by Sinclair Research in the UK. And I got myself one of those. Actually, it was my friend's, but I more or less took it from him.
And then I got a ZX Spectrum a little later, and that sealed my fate. I became a power computer nerd. I remember sitting in my parents' living room, coding in assembly, actually, when I was about 12 years old, because I had gotten tired of the BASIC interpreter built into that machine.
So pretty, pretty nerdy way of behaving as a young boy. Not like very many other people in the early 80s because nobody had computers back then. That's different now.
So anyway, that, I guess, sealed my fate. But I never really thought at the time that I was going to be working with computers; it was more like computers were a plaything. I was always fixated on more sciency things, and I wanted to be a physicist.
So the next thing that's probably worth relaying, other than just normal youth, is that I decided to study physics. But also in physics, I got very involved in using computers to analyze data. And in physics, that primarily means either analyzing experimental data or simulating theoretical approaches.
So I got very much into the simulation part of physics research and started working on building very early high-performance computing clusters to simulate quantum fields, actually. These kinds of computers are used to simulate all sorts of things, but I was particularly fascinated by the idea of simulating quantum fields, which is a difficult and strange problem where you simulate infinite worlds by kind of chopping up time and space in fascinating ways.
And to this date, actually, that approach is still the most promising approach to analyze quantum fields and understand quantum physical reality at the lowest level. But my attention turned more and more to the building of computers that were able to analyze this data, rather than thinking so much about what it physically meant.
So that took me on a journey where I started building more and more custom hardware solutions. And then in '99, I think, I created a kind of network interface card — a small hardware device that allows you to build more efficient, high-performance computing clusters.
So I did my first startup company — we decided to do it in '99. I founded the company together with another student at the university, and we actually launched it in 2000. So that got me out of physics and into this wonderful world of building companies and startup companies, particularly in the tech space.
So the world looked different in 2000. It was just before the .com bubble burst. We had some very early successful traction with our product — we sold it to, well, the older among your listeners will remember names like Yahoo, and here in Europe we had our own equivalents.
So those kinds of companies were our customers. But the .com bubble burst, and we were hit by that, and that took the company I founded back then on a fairly rocky ride. We didn't die as a company, but we didn't succeed the way we thought we were going to either.
So a couple of years after that, I actually handed the company — the medium-successful company — off to a French capital firm. And then I went into the consultancy business. I more or less got involved with building a consultancy company, almost a management consultancy, but one of the more nerdy ones out there, also based here in Copenhagen. And I spent about 10 years in that company.
A lot of the consultancy we did was actually working for different venture funds, doing tech scouting and due diligence work on everything that they contemplated investing in. So I kept this connection with the startup ecosystem for all those years. I worked with several different venture funds, but mainly some located here in Denmark. And I think I did tech due diligence — that is, an assessment of the technical feasibility of a startup idea or early-phase startup company — on more than 80 different companies over those years. So I've seen a lot of different startup companies, seen a lot of different ideas, and formed a lot of opinions.
Actually, I also ended up joining one of these companies. I did a three-year stint at an AI company called Blackwood Seven that I was quite fascinated by. Not so much by what they were doing (it's kind of like analyzing market efficiency data for marketing purposes), but by the challenge of analyzing huge amounts of data and uncovering structures in it. That was, I think, 2014 or something like that, when I did due diligence on them. And then I decided to jump over to the other side of the fence and become a member of a startup team again, so I was at Blackwood Seven for several years.
And this is where the story, at least in terms of my current company Abzu, starts to sound interesting. While I was at Blackwood Seven, I realized that there was an idea that had been bubbling in the back of my head since, I guess, '95: that you could use quantum fields to simulate machine learning or artificial intelligence models.
So that idea, actually, it wouldn't be fair to say that it occurred to me back then, because it actually occurred to me in ‘95. But I realized that perhaps the technology and computers had evolved far enough that it was perhaps feasible now to build a quantum simulator to do artificial intelligence.
So I left Blackwood Seven — I think it was '17 — and then I took a year of sabbatical, thinking about what to do next. And I did some early prototypes of this idea. And I went around, I guess, most of Europe, and connected with people that I'd gotten to know over my many years in startups, and tried to put together what I thought would be the ideal team to realize the idea of quantum fields for AI.
And that's why the founding team became pretty big with the seven founders. It's not common for startup companies, but we felt we needed some very strong intellectual resources to actually build this rather crazy idea. And the best way to put together a brilliant team can sometimes be to just involve them as founders.
So the people founding Abzu are from Italy, Spain, and Denmark, and it's more or less a coincidence that those are the locations they're from. It really was just the best people within, say, high-performance computing, artificial intelligence, and systems-level computer software engineering that I've gotten to know over the years.
So we founded Abzu. And that brings us to 2018, when the company was formally founded. And the last three years have been spent on first building this technology, proving it works, and now commercializing it in the pharma and health science fields.
What a story that is. And I mean, I'd love to be a fly on the wall at a pub conversation between you and the seven founders. Goodness knows what you guys talk about. Well, we might come on to that in a second.
I want to take you right back, first of all, take you right back to being nine years old, and about computers. Because you were clearly fascinated by computers, there was something that motivated you to take computers apart to kind of almost develop this, like, symbiotic relationship with computers that you had. Which is interesting to me, because I certainly felt something similar, by no means to the extent that you have, but technology did always fascinate me.
What do you think it was that fascinated you about computers? What was it that was driving you to take them apart, to learn to code? What do you think it was?
Well, first of all, I had an advantage that I really think it's a shame people don't have nowadays, which is that back then, computers were so simple that you could actually understand them.
Imagine that you bought a Ford Model T. You could take it apart, I'm sure, and put it back together. And it was probably pretty easy to know what a car was and how it functioned, through and through.
So I was from that era. There was nothing in my ZX Spectrum that I didn't understand. I knew every single chip in it, the effect of every single machine instruction I could give it. All the memory space was mapped out, down to the I/O devices, and it was… it sounds impressive, but it wasn't hard. The computers were simple, they really were. So that, I think, was a huge advantage that my generation of computer scientists had. There was no way of not learning them from scratch if you were actually curious.
So, first of all, that was certainly an advantage that I wish more young people had today. I mean, I wouldn't even try to pretend that I understand a modern PC like the one I'm speaking on here. It's full of stuff that I have no idea how it works, and I probably couldn't learn in a lifetime.
So that, I think, is an important point in shaping my early years. Then there was certainly something about... computers as a concept. Just the idea of computing as a procedural thing, or what Turing called the "Turing machine", was just a conceptually different way of thinking about reality. And I think there was something in it that I latched on to already as this nine-year-old. I remember my first programs were silly things, like saying, "Hello, I am your ZX Spectrum, what is your name?", and I type "Casper" on the keyboard, and, "Hello, Casper, how old are you?"
And I was just fascinated by the fact that I typed “Casper”, and now the computer was able to use that word back to me. I guess that kind of fascination probably is hard to muster today when you grow up with the computer so powerful as we all do. But, so, I mean that is still my favorite: just this modality, this ability to be creative and create machines or create software that did things that just couldn't be done in any other meaningful way.
There’s something as well for me in your story about the, and maybe I'm reading too much into this, but also it seems to me that the concept of infinity… the concept of limitlessness… Because you talked about moving on to simulating quantum fields and I'm not going to pretend I know what that really is at this point. But chopping up time and space to simulate infinite worlds as well. I mean, the fact that you speak with such ease that that was something that you were doing that you wanted to do. It seems like that was fascinating you and, not to put words in your mouth, but I suppose exploring so much possibility, exploring so even what you said there about realms and fascination of what could be possible, there's so much there about “What's the possible?”, “What's infinite?”, “What's at the other end of all of this?”
I suppose when you have the ability to grasp that at a young age, and you allow yourself to be fascinated by that, you can't help but surely see where that goes.
I guess not. I mean, so, my fascination with physical reality was probably about the same age, but maybe a few years later where it kind of dawned on me. So, I guess, another point in my life story is that my mom was a fanatical collector of science fiction novels.
So she had every science fiction novel from the 30s and up to like, the 80s, where I started reading them on our bookshelf. So I also had the science fiction — both conversations with my mom, but also reading them once I learned English — that was probably very formative for me.
Also, science fiction, for some people, can be a gateway into science, where you want to know: how does this really work? Particularly this flavor of hard science fiction that was very popular in the 80s. And I think what quickly dawned on me (I take it like second nature now, but I know that some people don't really think that way) is that when I look at reality around me, I don't think that I see reality. It's not like reality is this thing. It's a very high-level abstraction, an emergent thing that I can absorb with my senses.
But I'm not really challenged by it. It doesn't really worry me, for instance, that a particle can travel from A to B by taking a superposition of an infinite set of paths between those two points.
It's like, I know that when I say things like that — “particles go from here to there, not by a straight line, but taking a sum of all paths” — people say, “What?” But it never really bothered me. It's just, well, they do.
And if you just integrate all those paths with each other, it just looks like a straight line, and that's why we call it a straight line. So it's not reality that colludes to make a straight line. It is the appearance of this actual underlying reality that looks to us like a straight line. And, I guess, from having thought along those lines since being a child, it just doesn't sound strange to me.
It's so interesting, because I, the other day, was reading about the double slit experiment. And I'm not going to butcher the explanation of it for those that don't know, but let's just say: Particles completely change their behavior when they're observed versus when they're not. And part of me was frustrated that I couldn't wrap my head around that. I couldn't wrap my head around: as soon as you turn on a machine that detects where they are, they do something different.
Part of me was extremely frustrated. And part of me was just, kind of, I allowed myself to feel the wonder of that. To feel the wonder of how much out there do we not know. And that is wonderful, I find that wonderful, I find that extremely engaging. And it makes me want to have conversations with people like yourself to learn more about other things. And I love that.
Yeah, well, I mean, the double slit experiment is such a good starting point for wonder, because it's just, as you say: reality retrospectively makes up its mind about what happens when we observe it. And who are we to change reality retrospectively? How can that come about? But again, remember, that is just the appearance of it, right? The slits are being observed also. The slits don't have to make up their mind about where they are until you observe them. So the entire thing is a thing in flux; reality is created relative to the observer.
And here there is one thing that I will not even try to explain, and that's the notion of consciousness. What does it mean to be an observer? For quantum physics, that's not really important, because a thing can observe another thing. So you can create these chains of observation for the wave function to collapse — technically, things just have to be observed. But you can't really know what you observe unless something observes that thing, and so on, in a chain until you reach you. So at the end of the chain there has to be — not a human, but you. It has to be me who does that. But again, this can all be expressed in nice mathematics. So it sounds crazy, but it's okay.
That's where I want to move to now, right, because, like, whilst I could talk about the intersection of philosophy and physics all day, I'm interested in how all of this relates to what you do now with Abzu.
So you are, well, you tell me. I'm hearing words, quantum field simulating machine learning, simulating AI, chopping up time and space to simulate it. Tell me about quantum fields. Tell me about what a quantum simulator is. And tell me how that now helps pharma companies to build medicines that can help humanity.
Yep, let's try. So, starting with quantum fields, we could actually return to the double slit experiment. You have a particle: you emit it, and then it hits a detector on the other side of the double slit, and it takes a route that is some sum of all the possible routes it can take.
But since you've created two slits, it has to go through either one or the other, right? So you force a particle to take one of two routes. But, actually, that's not true. The particle can take any route it wants to, except that there's one constraint: that it has to go through one of the slits.
So when the particle moves from the source to the target, it takes all possible paths through space. But a lot of those paths just cancel out. So when you end up observing the particle, you see it has taken a single path. But you can't know which path that was until you observe it.
Now, that allows you to repeat the experiment again and again. And every time you observe it, you get a different result, right? The particle takes a slightly different path every time you measure. But there's more likelihood of it taking certain kinds of paths than others. And that generalizes to many particles. What we do in our technology is set up these kinds of experiments where we send particles off from some source — all simulated in computers — and then we calculate the different trajectories they can take through space until they hit the detector.
And that list of possible trajectories is actually infinite. So it's an infinite list, just like in quantum physics. Chopping up space and time here just means taming this infinite set of functions: by saying, well, the particle can take any route it wants, but it has to go through certain vertices in a grid, we can move from the continuous space into a discrete space. So essentially we can generate trajectories that are no longer from an infinite space, but just a very large, say 10-to-the-100th space. There are still more possible trajectories the particles can take in the simulated reality than atoms in the universe, but now we can sample this space. So that's the next step.
Now we run these simulated quantum experiments, and we sample them (the trajectories). We take those trajectories out and then we try to think of those trajectories not as trajectories of particles, but instead as mathematical equations.
So, for instance, if two particles meet in a certain way, we think about that as the addition operation. If they meet another way, we think about that as multiplication. If they do a certain kind of twist in space, we're thinking about that as a logarithm. But it's just in our mind, it's not like that in the simulator. That's just how we do it.
So whenever we sample a trajectory, we can immediately translate it back to a mathematical equation. And now we have the core of the Abzu technology: a technology that allows us to sample mathematical equations, where the inputs are particles, and the output is a detector. Got it? Good.
So, next step: take a data science problem. Say you have collected data about — let's take something really simple — the age and the body mass index of a number of people. A is age, and B is body mass index. And then you want to predict the probability of C. Let's make C stand for cancer. Your question is: I would like to understand how age and body mass index relate to, say, cancer.
So that's an equation, right? It could be any equation: log(age) plus the square root of your body mass index.
And you can make that equation arbitrarily complex as you please. But the data can be noisy, and maybe you can't even predict the probability of cancer. But if you can, then you can express the probability of developing cancer as some mathematical function of age and body mass index.
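As a concrete (and entirely made-up) illustration, the kind of equation being described — a log of age plus a square root of body mass index, squashed into a probability — can be written as a few lines of Python. All the coefficients below are invented for illustration; they come from no real study.

```python
import math

def cancer_risk(age, bmi, a=0.8, b=0.5, bias=-6.0):
    """Toy risk model: a * log(age) + b * sqrt(BMI) + bias,
    passed through a logistic link to yield a probability.
    All coefficients are invented for illustration only."""
    score = a * math.log(age) + b * math.sqrt(bmi) + bias
    return 1.0 / (1.0 + math.exp(-score))
```

The point of such a closed-form model is exactly what's being described: you can read off how each input enters the prediction, rather than probing a black box.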
So: in comes our technology, which, by the way, is called the QLattice® after quantum lattice.
And we use that to sample this infinite space of mathematical functions, letting A (age), B (body mass index), and C (cancer) interact with each other.
So you can sample the infinite set of all mathematical functions in that way. Let's say that we do that, and we get 2,000 equations out. Then we can compare those equations to some kind of observational data we have, and we will know which of these equations matches best.
Now we report back to the QLattice what we liked and what we didn't like — essentially saying, oh, but this equation number 172 was way better — so we sort the equations by their fitness. And then we update the probability fields in the quantum field, and we sample again.
So we gradually converge the probability field in the quantum lattice, in the QLattice, so that when we sample equations, they are more likely to be the best equation to explain the data.
But it's a continuous loop, right? We pull functions out, compare to data, pull functions, report back what we liked, what we didn't, and so on.
And we can demonstrate that, over time, this converges to us having found (with a very, very high probability) the best possible equation to relate these things to each other. And here's the point: that equation is the simplest equation that can explain the relationship.
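That sample-score-reinforce loop can be sketched in miniature. This is emphatically not the QLattice itself — just a toy Python illustration of that style of search, with a tiny pool of building-block functions standing in for simulated particle trajectories.

```python
import math
import random

# Building blocks that a sampled "trajectory" can be read as.
OPS = {
    "log": math.log,
    "sqrt": math.sqrt,
    "square": lambda x: x * x,
    "identity": lambda x: x,
}

def sample_equation(weights, rng):
    # Draw one transform per input variable, proportional to its weight.
    names = list(OPS)
    return tuple(
        rng.choices(names, weights=[weights[n] for n in names])[0]
        for _ in range(2)
    )

def fitness(eq, data):
    # Mean squared error of the fixed-form model f(a) + g(b) against targets.
    f, g = OPS[eq[0]], OPS[eq[1]]
    return sum((f(a) + g(b) - y) ** 2 for a, b, y in data) / len(data)

def search(data, rounds=300, seed=0):
    """Sample candidate equations, score them, and reinforce
    the building blocks of low-error candidates."""
    rng = random.Random(seed)
    weights = {name: 1.0 for name in OPS}
    best, best_err = None, float("inf")
    for _ in range(rounds):
        eq = sample_equation(weights, rng)
        err = fitness(eq, data)
        if err < best_err:
            best, best_err = eq, err
        # Low-error equations give their building blocks a bigger boost,
        # making similar equations more likely in later samples.
        for name in eq:
            weights[name] += 1.0 / (1.0 + err)
    return best, best_err
```

Run on data generated from log(a) + sqrt(b) with positive inputs, a search like this settles on exactly that pair of transforms — it returns the generating equation itself rather than an opaque fitted model, which is the whole contrast being drawn here.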
So let's say that there is a relationship that governs this. An example I quite often use: let's say that we took the data that Albert Einstein had in 1904 when he was thinking about the special theory of relativity. If you take that data, which was coincidentally collected by two physicists called Michelson and Morley, and run it through, the thing that comes out is what's called the Lorentz contraction factor — the mathematical expression of the special theory of relativity.
So, what comes out is not some kind of model that fits the data, it is the mathematical formulation of the theory of relativity, the special theory of relativity. So that's cool, right? If Einstein had this machine, he would have been able to just take the data, run it through, and out would come his own theory.
He wouldn't, of course, have known what that theory was about, so it would still have left a lot of room for Einstein's brilliance to say, "Oh, but that seems to be an equation that expresses time contraction as coordinate systems move relative to each other." But at least he would have had the opportunity to do that, because the equation was given to him, and now he just had to interpret it.
So perhaps a less brilliant Einstein could have formulated the theory of relativity, if he had had the QLattice. Now that has already been formulated, so we're looking at different problems nowadays. But essentially, the core ability of the technology is just that: extract simple equations that can explain your data.
Take health science, which is where I've done a lot of the actual work myself — we focus a lot on pharma and health science at Abzu. I've been doing a lot of work with actual doctors, analyzing health science datasets. What that means is that we can take some clinical research data of some kind, or health records of some kind, and extract simple mathematical equations that predict things.
So, for instance, a paper that I'm working on with a collaborator is about figuring out what causes preeclampsia. And we've actually found some, as yet unpublished, very interesting results about nonlinear relationships between certain serum measurements in these women that are very strong predictors for preeclampsia. Stronger than what we've seen with any existing models for preeclampsia. And the model is not complicated; it's very simple. It just highlights a certain interaction between certain serums. I'm not a doctor, so I won't really try to interpret exactly what the model is saying, but it's a simple equation. And it does give better predictive power for which women develop preeclampsia. And you can give it real biological meaning, because these are just serum measurements.
So that's an example from health science. In pharma, we're doing a lot of work in drug discovery — that isn't me personally, but some of my colleagues — where we're analyzing compounds. In that space, we're not talking humans, it's just compounds, molecules: some kind of antisense oligonucleotides, or siRNAs, or mRNA vaccines, or whatever. Some kind of compound, and we want to understand why it has certain effects.
And perhaps the thing we want to understand is why a certain compound sometimes causes damage to the liver — say, is hepatotoxic. Say that is what we wanted to predict. Then, with our technology, what comes out are simple equations that relate the structure of the compound to the probability of it being hepatotoxic.
Which gives the researcher using this technology the opportunity to say, "Not only can I predict whether a compound is going to be hepatotoxic" — machine learning techniques like random forests, gradient boosting, and neural networks can do that already — "but I can actually also look at the equation and ask: what is it saying?" And to give an example of what you might find: for certain antisense oligonucleotides, which are small strings of DNA, it could be that there is a certain sequence in the five prime end of the gap of the antisense.
And I don't expect the listeners to really know what that necessarily is. But, anyway, there are certain structures of the molecule that have a strong causative effect on the probability of it being hepatotoxic. So the researcher can then go back in the lab and design around that, based on knowledge and not random guessing.
I think this stands in contrast to machine learning techniques as they are currently mainly applied — in almost any field, you have a data set, you fit some kind of model to it, and you get a model out that is able to predict. Then you can use that model to do screening, right? You just create compounds and predict: How likely is it that it's going to have that effect? How likely is it that this patient is going to develop that disease? But you can't really inspect the model and say, "What is it saying? What is it doing?"
So here we get into what people are calling… I like to call it "inspectable artificial intelligence". It's also called "explainable AI". You get an explanation rather than just a model. And that, to me, is crucial — personally, I think that's crucial in any field. I'm actually quite skeptical about building predictive machines where we can't really explain why they do what they do. Because that means they only do what we expect as long as they're used within the scope of what they were trained on; we don't know how they behave outside of their training data. Which, I guess, in some cases is fine. But I'm skeptical.
There's a lot that I want to talk to you about here. That was a heck of an answer to my original question.
To confirm that I understand what your technology does: it is finding the simplest equation that explains the relationship between various sets of data. And that becomes very interesting to me, particularly in healthcare. Because quite often — and I suppose this is back to a bit of philosophy — I've often asked myself in healthcare: What are we actually aiming for?
Are we aiming for the perfect diagnosis? Or are we just aiming for the healthiest patient? And how do we define healthy? Do we include happy? Do we include "well-explained to"? There are lots of things that make up what is actually good clinical care.
But I suppose if we were to say, the perfect diagnosis each time, and that's what we're aiming for, then if we take your technology, and we give it all the data that it needs (and that is also a question: What would be the perfect data that it would need?), in theory, then, just as you've said, in a few of your examples, the… Well, your preeclampsia example, you know, you can get to a point where you can start to explain why that happens for disease processes that we're quite unsure of currently.
And then my mind goes to: Well, if you can map the genome, then you can add in what everybody's measuring on themselves — “the quantified self” — you measure everything that, you know, the patient is eating. Everything, every place that they've been for air quality, of things they might have breathed in, for every tap water they might have drank. The more data that you're feeding this with, in theory, you just get closer and closer to the perfect diagnosis or the perfect predictor, which becomes very interesting. Have I understood that correctly?
Yes, that's a heck of a question — there are so many answers I want to give to that. But, to start with: I think it's important to realize, when we work with machine learning or fitting models to data in general, that there are three types of reasons your model might not fit particularly well. And when you work with those kinds of approaches, it's important to understand which is the most likely scenario in your circumstance.
The easiest reason to understand, I guess, is that what you want to predict cannot be predicted from the data. If I tried to predict tomorrow's weather from what I had for lunch, that is not going to work, no matter how much I train. We call that the base error, because sometimes you can partially predict it from the data: even given the perfect model, there would only be a model that is so-and-so good, given the input data you feed it.
Then you have noise, which is the situation where you don't really know whether the measurements you put in, or the thing you wanted to predict, were measured correctly. So maybe you didn't actually have the gene you thought you had, or maybe you didn't actually do what you said you did. That's essentially noise. And that's separate from base error, right? If you could just measure more accurately, you'd be able to predict more accurately, or fit better models. So that's the second category. You can never really know whether you are noisy or whether you just have base error — that's one of the challenges a lot of people have. You have to just intuitively guess that.
Finally, there is the possibility that the modeling technology, the way you try to model, is just wrong. Say you're using, like a lot of people do in health, a linear model. If the effect you're studying isn't linear, then you're not going to get a good model. You can try as much as you want to predict the probability of developing a disease as a linear function of age; you can only get so good, because most diseases are actually exponential in age.
But then, say you know that, and you fit an exponential function instead. What if the relationship isn't an exponential in the natural base? What if it's actually a square function, or something else entirely? If you fit a fixed model, or a fixed modeling technology, you can only get up to the limit of what that modeling technology can express.
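Casper's linear-versus-exponential point can be sketched in a few lines of Python. The numbers here are invented for illustration (the age coefficient 0.08 is an assumption, not anything from the episode): a straight line fitted to a risk that grows exponentially with age fits far worse than a model of the right functional form.

```python
import math

# Hypothetical disease risk that grows exponentially with age (invented
# coefficient): risk = exp(0.08 * age).
ages = list(range(30, 91, 5))
risk = [math.exp(0.08 * a) for a in ages]

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def sse(xs, ys, predict):
    """Sum of squared errors of a prediction function."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys))

# Wrong model class: a straight line fitted directly to the risk.
a1, b1 = fit_line(ages, risk)
err_linear = sse(ages, risk, lambda x: a1 * x + b1)

# Right model class: fit a line to log(risk), i.e. an exponential model.
a2, b2 = fit_line(ages, [math.log(y) for y in risk])
err_exp = sse(ages, risk, lambda x: math.exp(a2 * x + b2))

# err_linear dwarfs err_exp: the linear model has hit the ceiling of what
# its model class can express, no matter how well it is fitted.
```

This is the third error source in miniature: the data is noiseless and perfectly predictable, yet the linear fit stays bad, because the limitation is in the model class itself.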
So I think that our technology, the QLattice, takes the last part out of the picture. It will essentially search through the infinite space. So it will, if there is a mathematical relationship, it will find it. So that's good.
So now we are back to problem one or two. And in a lot of omics cases, like genomics, where we try to predict a certain diagnosis from the genes alone, from the genome alone, I think you're in the base error situation.
There are just a lot of things about your outcome that can't be predicted from your genes. Then you can enrich that data set with other types of omics data, like, say, protein measurements, or other things. Then, of course, a technology like ours will be able to say: is there a better fit in that case? But you're still probably in the base error space, although you fare better.
And then you have lifestyle data, the choices people make, that you can add. And, at the end of the day, if you add all this data in, you will be able to get pretty good models. Except that there is a piece of base error that you'll probably never overcome, which comes from random fluctuation.
So given any kind of problem, you will always have the base error of random fluctuation. And sometimes that's small. Even in Einstein's theory of relativity, random fluctuation plays a role: planets can fluctuate a little bit, but it doesn't really matter to the overall model.
But in other situations, where you have an extremely complex system, like a human organism evolving, the situation is different. Randomness is simply a fact of the system. So let's take something like genome-wide association studies, which have been applied to genomes in many settings. They assume the effects are linear in nature. And that's wrong to begin with, because genes interact. But they're also wrong to assume that you can even do these predictions of disease based on the genome alone. And even going to multiomics doesn't remove all of that. But it doesn't mean we shouldn't try. I mean, there's a lot of probabilistic diagnosis that can be done by analyzing this data, where I think a technology like ours could play a role. So those are my thoughts.
Wow. And so I’m interested now, I suppose, as we start to wrap this up, with Abzu and everything that you're doing. You've talked about pharma companies, and the things that you can do there in drug discovery and clinical trials and biopharmaceutics, things like that. In that more health science work that you've talked about, what is the ambition there, I suppose? Short-, medium-, long-term; however you want to answer that.
And how close are we? The reason I ask is because, so many times on this podcast, I interview guests where all we can really hope for is incremental change. And I think that's quite a common thing in healthcare innovation, because you can't move fast and break things when patient care is at risk, you know, for things that are clinically facing. And when it comes to improving technology, it often is incremental change: we are looking for percentage increases and decreases, and that is what we can hope for.
I think here, though, it's rare that I get so, I suppose, inspired. Or so into my own head thinking about the potential. And it seems to me that the potential here is something quite important, when it comes to things like hope, when it comes to what can be different in future. I think a technology like yours (and a mind like yours, frankly) is inspiring. And it can provide a lot of hope that we can have a stepwise change in healthcare, and something can significantly improve.
So when it comes to health science, and we can talk about pharma too, but when it comes to those — that preeclampsia example — and other things: What can we expect here?
Well, I think we can expect a lot. I think technologies like ours are going to be… Perhaps a leap is an overused word, but there really is, like, a treasure trove of information in the data that has already been collected in health. And analyzing it with a system like ours would yield a lot of interesting results.
It happens every single time, I can tell you. We take a data set, and we analyze it, and we find nonlinear, interesting relationships for which the doctors that I'm working with go, “Wow.”
Almost every single time. So the problem is actually in the data. Health data is quite often noisy. It's not necessarily collected in a structured way; it has to be cleaned up and realigned, and even then you have noise just in the measurements.
So that's a challenge. On top of that, of course, we have the challenge of access to data, right? Say, here in Denmark, we have some pretty good databases of health data. But it's not like I can just run them through the QLattice. There are a lot of legal and regulatory requirements. As there should be.
However, I would like to highlight that this is one area where we are also in a pretty strong position with our technology. Because the QLattice, as I explained before, never actually sees your data, right? It just generates models; you take those models and, on your own data, you compare that data to the graphs, to the equations. So the equations come to you, and you don't send the data back. You just tell the QLattice which equations you liked. The QLattice knows nothing about the data.
So that gives us at least an opportunity to talk to regulators about analyzing health data with the QLattice: they do it right there on their own machines. That's also the way the preeclampsia case was done, for instance, and a lot of the other projects we're involved in. But, still, I mean, collecting structured, non-noisy, non-messy data about health would be a game changer.
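The data-stays-local loop Casper describes might be sketched schematically like this. To be clear, this is not Abzu's actual QLattice API: the model forms, the grid-search fit, and all the names here are invented stand-ins for the real system. The only point is the interaction pattern: the server proposes candidate equations, the researcher scores them on their own machine, and only the scores go back.

```python
import math

# Hypothetical model forms a "lattice" might propose. In the real QLattice
# these would be richer mathematical graphs; these three are invented.
CANDIDATES = {
    "linear":      lambda x, a, b: a * x + b,
    "exponential": lambda x, a, b: math.exp(a * x + b),
    "quadratic":   lambda x, a, b: a * x * x + b,
}

def lattice_propose():
    """Server side: suggests model forms. Knows nothing about the data."""
    return list(CANDIDATES)

def score_locally(name, xs, ys):
    """Client side: crude grid-search fit, run on the researcher's machine.
    The raw data (xs, ys) never leaves this function."""
    f = CANDIDATES[name]
    grid = [i / 50 for i in range(-50, 51)]  # a, b in [-1, 1], step 0.02
    return min(
        sum((f(x, a, b) - y) ** 2 for x, y in zip(xs, ys))
        for a in grid for b in grid
    )

# Private data stays here; only (model name, score) pairs go back.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.8 * x) for x in xs]  # secretly exponential
feedback = {name: score_locally(name, xs, ys) for name in lattice_propose()}
best_model = min(feedback, key=feedback.get)
# The lattice learns that "exponential" fit best -- and nothing else.
```

The design point this toy illustrates is the one relevant to regulators: the feedback channel carries only which equations fit well, never the underlying health records.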
There are a lot of interesting initiatives in that field, and there we're seeing things that have little to do with our technology; it's just about getting the data there and preparing it. So I think we're seeing things come together. And I think we'll see some major leaps, and Abzu wants to play a role in those major leaps.
We often ask ourselves: Why are we in life science? Almost everybody at Abzu is a scientist of some flavor, either a biologist, or a physicist, or… So it's natural for us to work with people who have a scientific mindset. But it's also because: where is the greatest place we can make our new technology make a difference? I mean, marketing? Financial analytics? Customer churn prevention? Nah. This is what matters, right? This is something, at least, that gets me up in the morning. Even though I'm a physicist, I still care about humans and health.
I totally agree. And one thing that is really interesting to me there is, again, you know, 200 episodes deep into this podcast, goodness knows how many entrepreneurs I’ve spoken to who want to change healthcare in a meaningful way. The number of times the conversation comes back to interoperability and structured data is just obscene. We often talk about it here as the boring bit. But honestly, it is that infrastructure piece. Whether we’re talking about, I don’t know, a new blood test machine or quantum physics, it comes back to: have we got an electronic medical record? Is it all structured data? And do all the systems talk to each other, to make everything more efficient?
It seems that no matter what I’m talking about, all the way up to quantum physics, it comes back to these same things. And if anybody ever questions the role of policy, the role of government, the role of all those things that can actually move the needle (particularly somewhere like the UK, where it’s a public healthcare system with the NHS)… You know, we can talk about venture funding and private equity funding until the cows come home.
But here you’ve heard it, listeners: even when it comes to quantum physics, if we can’t structure our data properly, and make all of our systems talk to each other and be interoperable, then we can’t advance this thing. So that’s a call-out to the people who can make a change in all those other structures, too. Because, ahhh. Man. I’ve been there in policy. It’s hard. It’s slow. It’s difficult. But, my goodness, is it still important.
Um, Casper, for our listeners, we have listeners across everything: entrepreneurship, policy, technology, healthcare clinicians. Do you have any asks of our audience in terms of the people that Abzu wants to connect with, in terms of personally any way you want to see this field going? Any asks of the people listening?
Well, if you have an interest in health science problems, then do reach out. I mean, the technology is there, and, actually a couple of weeks from now, we’re going to launch what we call the community QLattice, which allows researchers to use this technology for free. But it’s a limited resource; it’s a pretty expensive simulator to run. So it’s going to be free, but as a kind of allocated QLattice on our free resources.
And if you have a project, or if you have, particularly if you also have the data, then we’re very interested in helping out.
Please let me know when you’re doing that, because I will certainly amplify that message, that resource, out to my network. Because, as I say, rarely am I so inspired. You know, superlatives! Till the cows come home, I could do this.
But I am genuinely inspired by what you do; I think it is genuinely, incredibly interesting. Not least because of that stepwise change we talked about, but also the ability to find the simplest equation for data sets that relate to… I mean, my goodness, the potential there in healthcare that you talked about is pretty much unmatched, I think, among the guests I’ve had on this podcast.
And so, Casper, it’s been an absolute pleasure having you on board. I think we’ve covered everything, haven’t we, from philosophy and physics to quantum to healthcare. And it’s been a pleasure talking to you and thank you for coming on.
Thank you very much for having me. It was really great for me to be allowed to talk about all these nerdy details, and not just stay at the high level.
If people want to get in touch with you, or indeed the company, what’s the best way for them to do so?
Well, our contact details are all on our website. Do reach out either by looking up our details there, or on LinkedIn, or whatever is your preferred method.
We’re a very open company and everybody at Abzu is eager and pleased to speak to people who have interesting things to talk to us about. So. Phone, email, LinkedIn, go ahead and reach out.
Thank you so much.