How Pharma R&D can use AI to accelerate exploration, understanding, and insights.

Dr. Andree Bates and Casper Wilstrup discuss how Abzu’s QLattice explainable AI algorithm can be used to accelerate disease understanding, drug design, and insights in Pharma R&D.

The AI for Pharma Growth podcast is available on Spotify and Apple Podcasts.

On AI for Pharma Growth:

AI for Pharma Growth is the podcast from pioneering Artificial Intelligence entrepreneur Dr. Andree Bates created to help Pharma, Biotech and other Healthcare companies understand how the use of AI-based technologies can easily save them time and grow their brands and company results.

Transcript

Dr. Andree Bates:

Today’s episode is about how Pharma R&D can use AI to accelerate exploration and understanding to get new insights.

With explainable AI accelerating discovery, using AI is becoming a critical competitive advantage in Pharma for obvious reasons. But using AI for accelerated and automated disease understanding and drug design, and even bioinformatics, can actually speed up the creation of good drugs and get them to market to the patients that need them much faster.

Today, I’m with Casper Wilstrup, the CEO and founder of Abzu, who has built an automated, explainable AI.

Thank you for coming on the show, Casper.

Casper:

Thank you very much for having me.

Andree:

You’re very welcome. Could we start with you telling us a little bit about your background and how you got into this?

Casper:

I’ve been working with computers since the 80s. I got my first computer as an 11-year-old boy back in 1981. Computers have defined my life ever since.

I did end up studying physics in the 90s. Not computer science, but even there, it was mainly about using computers to do data analytics on physics problems.

I founded my first company back in ’99, a company that built hardware for high performance computers to simulate different processes, including quantum fields. And I’ve been in the entrepreneurial space since, both working as a founder in my own companies and also working on the venture capital side for several years. So I’ve seen the entrepreneurship ecosystem from both sides, particularly here in the Nordics where I’m based.

I’m based in Copenhagen today. I’m the founder of a company called Abzu, and at Abzu we focus specifically on symbolic, explainable artificial intelligence, which is what we’ll be talking about today.

As the CEO of the company, I’m still very deeply involved in the research and development of the algorithms and the methods that we have developed and are now bringing to market at Abzu.

Andree:

I’m really excited to get into some of this because, you know, explainable AI is such an important thing when you’re in a really highly regulated industry like Pharma.

But before we go into that, what were the key challenges and gaps that you saw in the market when you decided to start Abzu?

Casper:

Abzu is about bringing an approach to artificial intelligence (called symbolic artificial intelligence) back to life.

It’s actually a very old idea in AI to use artificial intelligence to come up with new symbolic understanding or new explanations for phenomena in the real world. But it’s been lying dormant for quite a while, and a lot of AI has instead been focusing on a different approach, characterized by neural networks and deep learning, which approaches things in a slightly different way.

So when I established the company back in 2018, the idea was to bring symbolic AI to the forefront again and also to combine this with other approaches to AI, like the neural network approaches, to deliver a kind of AI that could really explain phenomena and not just predict or make decisions.

So from the beginning, we were focused on AI to explain things — not particularly in Pharma and the life sciences, that came later — but for the first two years of the company’s existence, we were just doing deep tech research: just developing these algorithms that we have now brought to the market.

Andree:

I actually read about your QLattice explainable AI algorithm. Could you tell me a little bit about that?

The QLattice:

Casper:

The QLattice is the AI we invented that approaches AI from this “explainability first” idea.

So the QLattice is all about encapsulating understanding of the world in such a way that we can use the system to generate hypotheses about the way the world works.

So the QLattice can be thought of as a source of explanations that might or might not fit a problem that you’re studying. So in that sense, it’s an idea generator or a hypothesis generator that can generate explanations for any kind of phenomena.

And then you can train the QLattice on data, abstract data or general data, and it becomes better and better at understanding the world that it has been trained on, and therefore it will generate even better explanations. But ultimately, the QLattice is designed to generate explanations with no data, which means that you can actually use it once you have a QLattice that has been trained on a generic corpus of some kind of understanding.

The QLattice can come up with new theories about things that were not there in its training corpus, such as diseases or molecular properties, and explain those theories in such a way that it’s fairly easy for a researcher to then validate the theory and check whether it’s actually true.
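As an aside for readers who want to try this: Abzu ships a community version of the QLattice through its Python library, feyn (mentioned later in the conversation as free for non-commercial use). Below is a minimal sketch of the train-on-data workflow Casper describes; the file name and column names are hypothetical, and the API follows recent feyn releases, so details may differ by version:

```python
import feyn
import pandas as pd

# Hypothetical dataset: one row per patient, feature columns plus a
# binary "disease" outcome column.
train = pd.read_csv("patients.csv")

# Connect to a QLattice and search for simple symbolic models.
ql = feyn.QLattice(random_seed=42)
models = ql.auto_run(data=train, output_name="disease", kind="classification")

# The best hypothesis is an explicit mathematical expression that a
# researcher can read, critique, and validate.
best = models[0]
print(best.sympify())
```

The point is the output format: not a black-box predictor, but a closed-form expression that can be inspected and tested as a scientific hypothesis.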

Andree:

That’s really interesting. What about the benefits of this on things like disease understanding, like biomarkers?

Using the QLattice for disease understanding:

Casper:

So: Disease understanding, for example. Very often, a lot of AI these days is focused on trying to predict whether a person will become sick or not. And that’s not really the scientific approach.

The scientific approach, conventionally for several hundred years, has been to come up with relatively simple theories that explain how something develops or happens. That goes for all sciences, from behavioral science to physics, where I have my background, but obviously also for etiology, understanding diseases: Why do people become sick in the first place?

So there’s been a lot of work on using AI to predict who will and will not develop certain diseases, perhaps based on their entire genome but also on other kinds of omics data. This work might be accurate in predicting, if you have sufficient data about a patient population, but it doesn’t actually bring the scientist any understanding of the process that leads to the disease in the first place.

So one thing that I have objected to for quite a while is that we tend to default to these kinds of black-box, predictive approaches in science these days, where we deviate from the ideal of science to come up with theories that are justifiable in their own right: All right, let’s give up on the theories and just go for the predictions.

And to me, that’s not really science (in the purest sense at least).

So when you use our approach, or symbolic approaches in general, the aim is first and foremost to understand the process that leads to the disease. Often that will mean that you are actually dissecting the disease, because many, many diseases are more than one disease, or there are many processes underlying a specific disease.

So understanding could involve things such as segmenting the patients into groups that actually have different etiologies or different reasons for being sick in the first place, which can in itself be super enlightening because they might then also need different kinds of treatments.

And at other times, it just means sifting through this vast space of possible explanations. Like: If you think of genes combining in various ways, there are so many different ways that they could, in principle, combine to cause a disease. It’s difficult for a human to survey all of those, but that doesn’t mean that the explanation isn’t simple. It just means that it’s a needle in a haystack, and we simply haven’t found that needle.
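To give a rough sense of the size of that haystack, here is a quick back-of-the-envelope count of gene combinations. The 20,000 figure is only the approximate number of human protein-coding genes, and the snippet is purely illustrative:

```python
from math import comb

genes = 20_000  # roughly the number of human protein-coding genes

print(comb(genes, 2))  # 199,990,000 possible gene pairs (~2.0e8)
print(comb(genes, 3))  # ~1.3e12 possible gene triples
```

Even at only two or three interacting genes, the space is far too large to search by hand, which is exactly the needle-in-a-haystack point.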

So in all of those cases, the aim of the QLattice, and our approach, is to not go for predictive modeling, but instead go for explanatory modeling. And these explanations, of course, are also good predictors, because true explanations are super powerful predictors. That’s the fundamental idea of science.

Whereas the opposite doesn’t hold. You can have a very strong predictor that actually doesn’t have any explanatory power at all.

Andree:

What about in RNA therapeutics?

Using the QLattice for RNA therapeutics:

Casper:

So we’ve worked in RNA therapeutics for relatively arbitrary reasons, it’s fair to say: due to the people who ended up joining the company as we grew, we ended up working in disease understanding and RNA therapeutics as a company.

In RNA therapeutics, it’s all about the molecular properties of a drug of some kind, typically a short oligo, which is like a short sequence of DNA or RNA. And what we’re interested in trying to understand is why these simple drugs have the properties they do, such as stability, or whether they bind to a target in RNA or in messenger RNA (depending a little bit on the actual modality of the RNA therapeutic).

But we’re specifically interested in drug properties: Toxicity, stability, efficacy. And these properties are themselves individual scientific problems. You can think about them as something you could, if you collected a massive amount of data, use traditional machine learning to study.

But with our approach, you don’t actually need much data, sometimes even no data at all, because the QLattice will generate hypotheses about, “Why are these drugs toxic or stable?”, or “Why do they fold in a certain way?”, or “Why do they have these off-target effects?”

These are all things that are essentially scientific questions, but by having a scientific answer rather than a predictive answer, you can incorporate that knowledge into your entire R&D chain. Suddenly, if you know that a certain, say, shape in the three-prime end of an RNA molecule has a certain tendency to produce some kind of toxicity, then that applies to any RNA molecule that has that property.

So science, in the sense of theories, has this power that is actually very general. Once you know that something is true, it’s true in many more situations than the one where you actually discovered it.

So ultimately what that means is that our customers are building up corpuses of knowledge, of understanding, of the properties of sequences of RNA or DNA. And that allows them to much more efficiently design molecules that, once you start going into the wet lab and testing them, have a much higher probability in the first go of living up to the criteria for which you designed them.

And all of this is achieved with a fraction of the data that you would have needed if you’d gone the conventional machine learning or AI route.

Andree:

So how much less data can you get away with?

Casper:

The interesting thing is that you can actually get away with no data. That’s a little bit of a limit case.

But to maybe give you a sense of how that works: When humans do symbolic reasoning, we don’t actually need data to construct procedures. We construct our procedures based on prior understanding of the situation.

Like, if I asked you how many cars are in a specific city, or how many airplanes take off from Copenhagen each day, you could, without actually knowing the number, devise a procedure that would quickly allow you to answer that question.

And then that procedure could be wrong, or it could be right, but you could generate it with no data. If the procedure is simple enough, then I can just try to understand it myself and ask: do I want to spend the time to actually test this procedure on real data? Then I can make the informed decision that I believe in it enough to collect maybe ten data points and test your procedure on those.

But since the procedure did not come from the data, it came from your intuition, ten data points can be enough to get very high statistical confidence in this procedure as a model. This is very different from machine learning in the traditional sense.

So let’s say you have a disease where you only have ten patients, and you come up with a genetic explanation for this disease. Then those ten patients are sufficient to give very, very high confidence in the model, because you’re using all ten data points to actually test the procedure.

So this is a very powerful idea in two senses: One is that you can work in the limit of no data. Obviously, ultimately, you need some data to validate a scientific theory, but not much. And the other is that in the loop, before you actually make the decision to go out and do the experiment, you can have a human domain expert who knows something about the topic validating whether this procedure is worth testing, based on existing knowledge.
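A minimal sketch of the statistical point being made here: if a hypothesis is fixed before the data are seen, even ten patients can provide strong evidence, because none of them were used to fit the model. The numbers are hypothetical:

```python
from scipy.stats import binomtest

# A rule specified *before* seeing these patients (e.g., "disease iff
# mutation X is present") is tested on ten new patients, and suppose
# it classifies all ten correctly.
result = binomtest(k=10, n=10, p=0.5, alternative="greater")

# Under the null ("the rule is no better than a coin flip"), the chance
# of 10/10 correct is 0.5**10, i.e. about 0.001.
print(result.pvalue)
```

Had the rule been fitted to those same ten patients, this calculation would be invalid; the separation between hypothesis generation and hypothesis testing is exactly the workflow described above.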

Andree:

It reminds me of many years ago when I was in neuroscience, working with one person’s brain. If you’ve got a double dissociation between particular pathways, it’s basically valid in everyone’s brain. So it’s very interesting. It’s kind of similar, and of course, a lot of AI is similar to brain science in many ways, isn’t it?

Casper:

Yes. But I think what you’re saying there is really a scientific claim, right? Quite often, a simple scientific theory can have a super large explanatory power. Not because of some statistical fluke, but because it actually captures the underlying process that causes the phenomenon you’re studying.

So there’s a causal element here: science ultimately relies on causality for the simplicity that we find in many scientific rules.

Andree:

Now, without breaking any NDAs of your clients, could you maybe give some use cases of how you’ve applied this in disease understanding or drug discovery, in that kind of area?

Casper:

So the stories that are easier to talk about are actually the ones that happen before the actual drug development starts, like when you’re still at the disease understanding phase, where we work together with both public researchers and also large Pharma companies to identify… Well, ultimately, the target. The idea is to identify new targets for drugs by understanding a disease and the disease pathway better. Then you can more specifically design drugs that intervene with those pathways.

We’ve worked in a lot of different disease areas, from neuroscience to inflammatory bowel disease to liver diseases of various kinds. These specific disease cases are covered by NDAs. But we’ve been pretty broad in our approach to diseases, because our technology is not tailored for any specific disease area. It’s more like an intuitive scientist-in-a-box kind of approach. It can generate new theories and collaborate with the scientists.

And then once you have these understandings (they are often delivered to the scientists on the customer side; often we don’t necessarily even know about them from our side, because we just deliver the technology, or we also help with the actual research, depending a little bit on the collaboration), the new disease understanding leads to more specific targets. And also sometimes narrower targets for subpopulations where you can understand the biomarker.

As I said in the beginning, you get this breaking up of the many different pathways leading to the disease. And that can then mean that you can develop more specific, and of course also less general, drugs that are much more efficacious against specific subpopulations in this disease population.

About half of our Pharma collaborations are in this area, and the other half are very often specifically in toxicity. Actually, in antisense oligonucleotides, which is one approach, or small interfering RNAs, or various other approaches, very often the kinds of problems we look at are toxicity related.

More recently we have worked with lipid nanoparticles, understanding the structures of the molecules in the nanoparticles themselves, and understanding the properties of the nanoparticles based on those structures.

So in RNA therapeutics, for example (well, not only in therapeutics, but also in vaccines), you often wrap the actual active drug in a lipid nanoparticle that plays some role either in the delivery or the stability of the ultimate drug. So the nanoparticle itself is not the drug; it’s the delivery vehicle.

But again, here there are a lot of scientific questions about why they have these good or bad properties that we find in them. This is a new area that we’ve gone into.

So generally, I think this is applicable very broadly. And I can also say that for independent researchers, we have a light version of our technology that you can actually use for free as long as it’s for noncommercial use.

So there are a lot of independent researchers who have just adopted the QLattice for research in areas that we have no specific knowledge about, ranging from space science to environmental science, behavioral science, energy consumption, and many other use cases, where we just see these recent papers pop up in which our technology has been used to discover some new understanding of some phenomenon.

Andree:

That’s very cool. It’s nice seeing it happen without you having anything to do with it. I mean, you’re doing the underlying part, but that’s very cool.

What about, you know, in the last year, we’ve had so much about generative AI. I mean, I know generative AI started in 2014, so it’s not particularly new. But because of ChatGPT, everyone’s talking about generative AI. Let’s hope it pipes down soon.

But, how do you think that these developments are going to affect Pharma R&D?

Casper:

Well, I think there’s some hype.

So first of all, generative AI is really bad at generating new things. So interestingly enough, we call it “generative” AI because it generates things, but it doesn’t really generate things that weren’t there in the first place.

And that is becoming clearer and clearer as people are getting experience with it.

Sometimes it can come up with “new” ideas because you, the user, have not heard of them before, but it really never has a new idea. It’s not built with the capability to come up with new ideas, in the innovative sense.

But that being said, generative AI, and language models maybe in particular, are a super powerful interface concept. The idea of a human being able to interact with a computer system in a much more intuitive way through generative AI is very powerful, and it’s going to change the way every industry works. That has already happened, but I think we need to keep our eyes on the fact that generative AI does not generate.

Is it able to actually accelerate scientific research in any meaningful sense? Yeah, no. But it can be the interface between the scientist and the tools that could accelerate science.

So what we’re working on today at Abzu is actually wrapping our QLattice algorithm in a hybrid configuration, where the generative part, the language models and other types of generative components, is the interface between a human user (a scientist, or maybe even a non-scientist) and the symbolic part, the QLattice part, and the actual explanations. That seems to solve some important productivity problems in scientific research, on drug development specifically.

Andree:

Yes, yes. And in medical affairs and areas like that, it’s great for summarizing the data.

But, you know, that’s out there, etc. What does the future look like, do you think, for Pharma AI applications?

Casper:

Well, I know most about the drug development part.

But we also have some thoughts there. And we have use cases even in manufacturing and even in marketing. So again, this is super broadly applicable.

But on the drug development side, there is no doubt that generative AI will enable many more people to benefit from technology, as an interface. It will also ultimately be able to automate some of the work processes that we currently have humans do. So it will also displace some jobs, there’s no doubt about that, in the agent sense.

Andree:

I know from one of your recent podcasts as well that you’ve talked a lot about agents. And agents are an important concept, like this long-running idea of a workflow being taken over, of automating a complete human workflow.

Casper:

Yeah. But in all those cases, whether it’s an agent doing the work or a human doing the work, they’re interacting with an AI.

The AI, if it’s generative, does not really have the capabilities to do much. It needs to turn around and use some tools. And here our approach, our kind of AI tools, I think have a much bigger potential to really do something in science.

Not so much automate or make processes more efficient, but actually accelerate the rate of discovery, in the sense that we can get a much clearer picture about why people become sick, even in rare disease areas, and also have a much easier path to the ultimate drug, including the clinical trial phase.

Because if you have a very good understanding of why the drug works in a certain way, that simplifies the clinical trial in many ways.

First of all, you can select the right patients. If you’re selecting based on a clear scientific understanding of the disease and how that matches with the drug that you’re testing, then you can make sure that you only select people who will actually respond to your drug. And that gives you a much, much stronger signal in the clinical trial.

But there is also some structure to the way we do clinical trials that should be updated to accommodate these new kinds of approaches, because there’s perhaps some conservatism in the requirements for how you set up a trial that doesn’t really allow you to learn that much along the way.

We’ve seen that quite often: A clinical trial has actually failed, we get access to the data, and then we run the QLattice, and it’s a case of “if you had just done this instead…” Already after maybe a couple of weeks, you could have seen that this population group was actually two population groups, and that you needed to just focus on one of them. So: redefine the trial.
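A small simulation can illustrate why a pooled trial population masks this. The setup below is entirely hypothetical: the treated arm secretly contains responders and non-responders, and the pooled test dilutes an effect that a subgroup analysis would reveal:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical outcomes (higher = better response).
controls = rng.normal(0.0, 1.0, size=30)
responders = rng.normal(1.0, 1.0, size=15)      # true effect = 1.0
non_responders = rng.normal(0.0, 1.0, size=15)  # true effect = 0.0
treated = np.concatenate([responders, non_responders])

# Pooled analysis: the responder effect is diluted by the
# non-responders, so the trial may well come out "failed".
print("pooled p-value:    ", ttest_ind(treated, controls).pvalue)

# Subgroup analysis, if a biomarker separating the two groups were
# known, can show a clear effect in the same data.
print("responders p-value:", ttest_ind(responders, controls).pvalue)
```

In Casper’s account, the QLattice’s role is to surface the explanation that splits the population, so the subgroup is defined by a hypothesis rather than by post-hoc data dredging.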

Andree:

We’re seeing a lot of that with AI these days, aren’t we? We can actually get those kinds of insights so much faster, even while the trial is still running.

Casper:

Exactly. If the regulatory environment would accommodate that, I think we could get more targeted, more efficacious, safer drugs. But, of course, also narrower drugs, because they will end up targeting a narrower population.

Andree:

And also, I think the FDA is changing. So in ’22, they actually changed their regulations so that, for what was originally animal testing and phase-two human testing, you don’t actually need animals or humans anymore. So it’s being done with digital twins and synthetic data now. Not all the way through, but, you know, it speeds up at least those first two phases, which is pretty interesting.

And while we’re on regulations, I was actually on a panel for the European Commission about the AI Act. And so if we think about all of these new regulations that are coming in, both in the EU as well as the US, what do you think Pharma companies need to look out for? Because there are so many changes and they’re all up in the air at the moment.

Casper:

Starting with the AI Act: the idea of classifying uses of artificial intelligence into these risk categories, including the high-risk category, is probably a sane approach, because it gives us a reasonable framework for whatever we do with AI. And I have fully supported that, actually, in all my public interactions as well.

There are aspects of the AI Act that I’m a little bit more concerned about. But this aspect is, of course, one that I actually support. It’s the sane and safe approach. It does mean that almost all of the many medical applications of AI have landed in this high-risk use case category. And that, of course, means that everybody working — whether it’s with diagnostics or drug development or clinical trials or prognostics or medical devices — needs to be aware both of what is required for high-risk use cases and of how to then live up to those requirements.

There are a couple of requirements that I find particularly interesting and important, also because they support what I’ve been saying since founding Abzu: If you base your decision system on massive amounts of data, it’s very difficult to ascertain that the data does not contain bias.

It actually becomes impossible, in my opinion, to live up to at least the spirit of the AI Act if you have a black-box decision system that has been trained on, say, a million patients. Because who are they? How do the many factors in those patients actually interact? What are the biases?

So that’s one problem where the mere fact that symbolic AI, our kind of AI, allows you to limit the number of patients that you actually derive the decision system from — it makes your life a lot easier.

But the other aspect is less clear. And I mean, the final manifestation of the Act has not really come out; we haven’t seen the implementation yet. I believe there will be pretty strict requirements on being able to explain specifically how the model behaves. What does the model do to this population versus that population when you use it for diagnostic or prognostic purposes?

And I think that is fair. We can’t really have a black-box oracle making life or death decisions for humans.

And then when the human turns around and asks, “Why?”, the only answer we can give is, “Well, the computer says so.” That doesn’t really fit well with my values, at least.

So again here: Symbolic AI approaches are super powerful because you can explicitly and in great detail answer that question. “The reason this drug is not prescribed to you is because you don’t have this specific mutation for which it’s designed, period.” Or, “The reason we don’t give this drug to you is because you smoke.” If that’s why, then we need to be able to say that. Very specifically.

Andree:

That just reminded me of that case quite a few years ago where IBM Watson made cancer treatment recommendations for oncologists. And the data was not good, and a lot of what it was recommending was actually unsafe for the patients, and it had to be pulled. So that backs up your argument: you know, it’s been trained on a lot of data, but who are these people, what is that data, and how accurate or unbiased is it?

Casper:

Yeah, I sometimes compare that approach to the Oracle of Delphi. Like imagine that we have a super smart oracle, but it’s like so much smarter than a human. And then you just input data and data and data and the oracle reads all that data. And then ultimately we have a system where we can go in and say, “Oh wise Oracle, answer this question for me”, and then it gives you an answer.

And when you ask why, it says, “I have no idea. It’s not for you humans to understand this stuff.”

Maybe that’s fine if we’re using our AI to do, like, predictive marketing or whatever, but if it’s life or death decisions, it’s a high-risk use case.

Andree:

I really agree with the AI Act in that, in these high risk cases, it is not fine.

Casper:

We’ll create a biased, unfair society with arbitrary decisions and a lack of trust from humans in the technology, because if you don’t understand, how can you trust?

Andree:

I hear that again and again in Pharma. You know, that trust element is so critical with AI as well.

Just one final thing on regulations: What do you think the regulators need to keep in mind when they’re drafting AI regulation, to avoid hurting Pharma?

Casper:

Well, I think there’s two things I want to say about that. One is on the implementation side. The track record from GDPR is a little bit worrisome.

GDPR is not related here, but everybody knows about it as well. I think if you look at the intention of the regulation, it is hard to disagree with. It’s fair that people should know if their data is being used for various purposes, but the implementation ended up being super bureaucratic. It’s very difficult for a company to live up to all the requirements: not in terms of not violating the principles of GDPR, but in actually following through all the procedures and routines and requirements along the route to using personal data.

So this is ultimately based on a legal regulatory framework that does not trust the industry. The reason you implement all that red tape in the process is because you don’t trust the industry to not break the law. And this is different from the more traditional approach in legislation where you say, “This is what you must and mustn’t do,” and if you do it anyway, we punish you. But that’s an after-the-fact kind of thing.

And you can say, all right, then there’s a risk that the law is broken, and that’s what we’re trying to avoid. But on the other hand, this idea that regulators need to inspect the very process in order to ensure that the law is not ultimately broken has really hurt European industries with GDPR. And it has also turned the public against GDPR, and GDPR is not something we should be against as a principle.

The same thing could happen with AI, with the AI Act. Again, I have no complaints about the use case regulations, the high versus moderate versus low versus no risk use cases, and the requirements for those. But if we implement something where you have to follow a long regulatory process with approvals along the way, it’s going to slow us down so much. And it’s all based on mistrust in the industry, not in AI, but in the industry. And this is something that worries me, because if it happens, it’ll happen in some jurisdictions. It’ll happen in Europe, perhaps, but not necessarily in other places.

Andree:

The FDA, again, did something quite cool in that respect for AI within medical devices.

So they had, you know, the 21st Century Cures Act a few years ago, probably about five years ago now. And in that, they look at the processes that the company is using to develop the AI, because they realize that they can’t go back every week, because AI is constantly being tweaked and improved. They wanted to look at how the company is actually running its processes, and then they approve it based on that, without having to revisit it every time it changes. So I do see the FDA, out of a lot of the regulators, doing some forward-thinking things, which is quite nice.

Casper:

There’s certainly awareness of this on the regulatory side, both in the US and also here in Europe; certainly awareness of it. So we’ll see. We’ll see where it lands. But there is a risk that it becomes very bureaucratic once it gets in.

Andree:

Yeah. When I was on that panel, the guy from the European Commission said it would be another two years before they finalized the AI Act.

During the pandemic, AI was used a lot and we were much faster. There were lots of concerns about safety, because we didn’t do all the processes we normally have to do, but ultimately it was successful.

How can AI impact development, and the development timeline, to produce pharmaceuticals faster?

We’ve got, you know, $236 billion being wiped off the revenue of Pharma in the next few years from the patent cliff. So getting drugs through to market faster is of big interest to all Pharma boards at the moment. How do you think AI can affect that?

Casper:

So, first of all, the pandemic was actually easy, because it immediately, sadly but very quickly, provided us with massive amounts of data.

That made it a very good target for conventional black-box machine learning, which also meant that the vaccines had a lot of tailwind from very quickly studying, or very quickly predicting, the properties of these long mRNA molecules.

Again, this was possible because we had so much data. We sacrificed the explainability. I have concerns about that. I mean, we probably had to act, but I have concerns about some of the situations, some of the decisions that were made, whether we actually knew enough about the molecular properties and how they interacted with the body to move fast.

But again, desperate times, desperate measures. The thing is, the ordinary picture for the drug industry is not pandemics. At least, I hope not.

It’s like many, many small diseases. All the big blockbusters have been made.

The industry needs to look at and find ways to be super profitable on much narrower disease populations. And these diseases are difficult because, as I said, they’re often degenerate in the sense that there are many different diseases that we think of as one. Cancer is the obvious example.

Like there’s no such thing as cancer. It’s so many different diseases.

So in order to do drugs in this space, where the easy fruits have been picked, we need to accept that we’re targeting narrow patient groups. Which in turn means that we need procedures that are really good at coming up with drugs based on very little data. And here AI, in the black-box sense, doesn’t do much.

Andree:

Exactly. That makes a lot of sense. Because basically, you know, at the moment, without a lot of AI, it’s still costing, you know, $1 billion to $2 billion to get a drug to market.

And you’ve got such a small patient population, it means the drug prices go way up. And now we’ve had the Inflation Reduction Act last year, where loads of drugs still under patent have had their prices slashed. So we really need AI to speed that whole process up, which will decrease the cost, and the time, of getting a drug to market.

I don’t know what you’ve seen, but I’ve come across a lot of quotes of a 60 to 70% decrease in the time to market. Is that what you’re seeing?

Casper:

Yeah, that’s probably where we are right now. We need to go lower than that.

Fortunately, it’s a win-win situation in the sense that everybody has an interest here. There’s nobody who benefits from these massive timelines and costs.

Societies have governments that pay the bill, and they want prices lower. They should be motivated to also look at the regulatory landscape for developing drugs, to make the price of drug development lower, if we can explain how that is done in a safe way.

The industry obviously needs new drugs, new patents. So although, yes, the prices that they can get when they finally succeed are massive, it is still both a volume problem and a trust problem.

People are skeptical about the Pharma industry as such because of these high prices. And ultimately, the patients also need the drugs, obviously. That’s why we’re all doing this: to give people better lives. So it’s a win-win-win.

So if we want the industry, the patients, and the regulators to trust each other, then this is the aim. We should be able to find ways. And I think, from a technological perspective, there are solutions. Again, moving away from black box, because black box is probably not the solution generally.

I mean, I’m an optimist. Everybody knows that. I keep saying that about myself. So I believe we’ll fix this. Hopefully sooner rather than later.

Because there are something like 8,000 rare diseases, and we’ve only got, you know, around 100 and something that we have drugs for. So there are thousands of diseases where people are suffering with no treatments whatsoever.

Andree:

Looking ahead, you know, what’s the envisaged direction for Abzu? And how do you see it evolving to meet these changing industry demands?

Casper:

So the QLattice is too powerful to be restricted to just drug development.

It’s a technology that actually, as I said, we have people using for space science. We have people using it for environmental science. So we need to find routes where this technology can come into the hands of people who need answers to these questions, even outside of the realm of science.

It can be like: How do you make better windmills? Or how do you better predict the maintenance of your equipment? There are actually all sorts of use cases where you have little data and need understanding. We need to make sure that our technology gets into the hands of the people who have those needs.

So for us as a company, we’re treading a balance between staying with our core customers and what we’re working on in drug development, and at the same time expanding out and getting this technology out. Going through partners will, I think, be the main path into other industries.

So this is something that we focus a lot on, also because Abzu’s growth as a relatively small company depends on people understanding the vast potential of the QLattice, and on getting it out there, into all of these use cases, as fast as possible.

Andree:

So it’s been so interesting talking to you. I’ve found this so fascinating, what you’ve done and the applications, and where it could go is incredible.

And I’m sure some of the listeners may want to get in touch with you. So for anyone listening who is interested in talking further to Casper and his team, you can go to www.abzu.ai and see some of the things they’re doing.

Read about the QLattice and get in touch. Thank you so much for your time today. Really, really fascinating conversation.

Casper:

Thank you so much for having me here. I really enjoyed it.


More podcasts about Abzu.

More podcasts about applications of our innovative technology, our work in drug discovery, and our unique team and culture.

Listen to Casper Wilstrup, CEO of Abzu, who sees the next step as AI going from being an aid to directly taking over human work.
A discussion on AI, science, and philosophy, and Casper explains the symbolic AI behind Abzu's proprietary QLattice®.
