En Verden af Information: What is artificial intelligence? With Casper Skern-Wilstrup.

Artificial intelligence (AI) frightens, entertains, and solves problems. But what is AI, how does it work, and what is it doing in your mother-in-law's microwave?

This podcast is available in Danish on Spotify and Apple Podcasts. The transcript has been translated into English and lightly edited for readability.

About En Verden af Information:

Through our different backgrounds in the humanities and the natural sciences, we examine and discuss various societal phenomena such as the climate crisis, COVID-19, and refugees. At the same time, we also discuss more traditional scientific questions such as: What drives evolution? What is life? And what kind of thing is consciousness?

Overall, we want to find out whether our different perspectives are diametrically opposed, or whether, through our discussions, we can find a common denominator that offers a greater understanding of difference and similarity…

Abzu podcast - En Verden af Information - AI
Transcript

Part 1 - Interview with Casper Wilstrup.

Podcast Hosts (PH):

Welcome, and maybe even welcome back, to the World of Information podcast, where we explore how information shapes the world and how the world shapes information. Your hosts this time are, as always, philosopher and historian Thomas Mikkelsen, and the biologist and ocean scientist, which is me, Rasmus.

Last time you were with us, we had a conversation with physicist Casper Wilstrup. We discussed what “existence” is, and we tried to get at how existence may, or may not, arise out of nothing. So it is, I think, really good that we have the opportunity to have another conversation with Casper, this time about his work with artificial intelligence.

And as you said at one point, Rasmus, one can say that intelligence is something through which we understand reality. So this work on what intelligence is, through trying to create an artificial intelligence, might say something about how we understand the world and existence. That makes it a perfect approach to another conversation with Casper, but also because artificial intelligence is in many ways held up as the solution to all the world’s great problems, and at the same time as a huge threat to humanity. And yet it may be something that most of us have only a vague idea about. So I think getting an explanation from someone who actually works with it could at least be a good place to start.

Casper, maybe you can tell us what it is you work with in terms of artificial intelligence?

Casper:

Well, I’m the founder of a company called Abzu, where we research and develop a new method of artificial intelligence. It has a number of properties relating to understandable or explainable AI, which makes it uniquely suitable for situations where it is important to understand the machines’ decision-making processes. We are a Danish-Spanish company with 27 employees, based in Copenhagen and Barcelona, with an office in Basel.

PH:

Before we hear a little about your artificial intelligence, it might be good just to get an idea of what it is in the first place. And then there is another term that is often used alongside artificial intelligence: ‘machine learning’. Is there a difference between machine learning and artificial intelligence? Or are they just two words for the same thing?

Casper:

Machine learning is a subset of artificial intelligence, and currently an incredibly important subset. But the concept of artificial intelligence in itself is broader than the fact that machines can learn. The idea of artificial intelligence, at least as a modern research field, actually starts in 1956, when a group of researchers centered around John McCarthy met for a conference on how to use these new computers, which back then were something completely new, to build systems that could solve some of the same tasks that people solve.

That is also where the term artificial intelligence first appears in a scientific context: in the invitation to that summer conference, which McCarthy hosted in ‘56. It is still the most common definition of what artificial intelligence really is: simply the research and work of building machines that can solve tasks one would normally have expected a human being to solve.

PH:

So there is a very close connection, you could say, with the kind of intelligence that we feel we ourselves have as human beings, and then you put the word ‘artificial’ on to say that it is something fundamentally different.

But as Rasmus says, maybe it would be an idea to dive a little into the research area that you have been working in for some years. Where is this research today? Do we have an intelligence that is more than just artificial?

Casper:

Well, McCarthy himself, one of the founders of the field one might say, poked a little fun at his own choice of the word ‘artificial’. He needed a term, but what they were after was not really an ‘artificial’ intelligence, but simply an intelligence.

And what then is intelligence? It is difficult to define, and we could talk about it for a long time. But in a practical context, it is about machines performing tasks for us the way a human being would, and in that sense artificial intelligence is also a slightly diffuse thing.

Every time we invent a technology that can take over a piece of work that previously required a human being, we tend to redefine it so that we no longer think of it as artificial intelligence. So you could say that as soon as we have invented a solution to a problem in artificial intelligence research, we no longer consider it intelligence at all; it is an eternal hunt. However, there are still some things that are quite obviously difficult for computers to do today, and research in artificial intelligence is about getting them to do those things.

PH:

So what you’re doing is using your computer to find patterns or answer questions. As a biologist, I often use statistical methods to find patterns — that is, purely predictively. So what is the real difference between artificial intelligence and statistical methods?

Casper:

Now we’re moving specifically into the area called machine learning, when you say statistical methods. One of the many tasks in making an artificial intelligence is getting the machine to learn, from observational data, how things are related. And the traditional way of handling observations and trying to bring meaning into them is to use one of a number of different statistical methods to see how the data is related.

So you basically try to find a predefined mathematical relationship between the data you observed and the quantity you want to predict. The problem with statistical methods is that the range of mathematical relationships allowed is very narrow. Take, for example, a linear method: you assume that all the parameters that can affect your prediction target have an independent, linear effect on the target. And that is not always the case. Very often variables will interact, in different and very complex ways, or there will be temporal evolution, and so on, which no statistical method can capture, because it has this fixed parametric form.

So machine learning is about expanding your search field and looking for mathematical models that can relate what you want to calculate to what you want to calculate it from, but now without imposing any particular, for example linear, constraints.
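To make the contrast concrete, here is a minimal sketch (my own illustration, not something from the conversation) of a fixed linear form missing an interaction that a widened search space then captures:

```python
import numpy as np

# Toy data where the true relationship is an interaction: y = x1 * x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=500)

def r2(features, y):
    # Least-squares fit, then the fraction of variance explained.
    coef, *_ = np.linalg.lstsq(features, y, rcond=None)
    return 1 - (y - features @ coef).var() / y.var()

linear = np.column_stack([X, np.ones(500)])                        # fixed linear form
expanded = np.column_stack([X, X[:, 0] * X[:, 1], np.ones(500)])   # widened search

print(round(r2(linear, y), 2))    # ~0.0: the fixed form cannot see the pattern
print(round(r2(expanded, y), 2))  # ~0.99: the interaction term captures it
```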

PH:

One of the concepts that most often comes up when talking about Facebook, Google, and so on is ‘algorithms’. Is it really algorithms that we are talking about here? Are they the mathematical forms that you describe? Or something else?

Casper:

In this context, an algorithm is a way of searching for relationships. We all know deep learning, for example, which is about having a data set where you want to find some connection between an input — it can be a picture, a sound clip, or some health data from a patient — and an outcome: this is a bus, if it’s a picture; this is a certain phrase, if it’s sound; this is a certain disease, if it’s health data.

Deep learning is an algorithm that searches through this data for deep, non-linear relationships between the things you take as input parameters in your model and what you want to predict. So ‘algorithm’ here refers to the way we search for the relationship, rather than the relationship itself. Because the relationship is ultimately a function: when you’re done finding your machine learning model, whether by deep learning or another method, you have a model in hand that you can feed new observations into, and it predicts the true answer from the observed data.

PH:

So what you’re saying is that there are, in fact, two forms of artificial intelligence: one where you find the answer, and one where you find the reasoning behind the answer.

Casper:

What is lost when using, for example, deep learning or other machine learning models is interpretability. A statistical method is easy to interpret: you sit looking at the results, and they say, “Well, the older the patient is, the more likely the patient is to develop some form of illness.” That kind of insight is lost in traditional machine learning. You end up with a model that can still predict the likelihood of the patient developing some disease, but how it arrives at that conclusion is completely opaque, because the model itself is so complex that it consists of, for example, hundreds of thousands of parameters that you as a human being cannot understand. So you no longer have any way to inspect what the model is actually doing to arrive at its verdict.

PH:

Does that mean that you cannot access the parameters that produced the answer? You cannot go in and be told that it was, e.g., 0.01 times 0.02 plus something? There is no way to go in and find what gave you the answer?

Casper:

No. If you try to write out the mathematical model that a deep learning algorithm has come up with, you will be looking at an expression that would not fit on an entire football field in small letters. It is a very, very long mathematical formula which, by means of an advanced method, has been ‘fitted’ so that it best suits the observations, but in which the individual parameters are completely meaningless. You cannot inspect the model and gain any understanding of what it actually does. And we are genuinely talking about that scale: modern deep learning models can have a million parameters.
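To give a sense of scale: even a modest fully connected network has a parameter for every connection plus a bias per neuron. A minimal sketch (the layer sizes are illustrative, not any specific model’s):

```python
# Parameters in a fully connected network: weights (n_in * n_out) plus biases (n_out).
layer_sizes = [784, 512, 512, 256, 10]   # illustrative layer widths

params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 798474 -- individually meaningless numbers, and this is a small network
```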

PH:

But what you do is a little different?

Casper:

We believe that there are a number of problems where the central point is not to be good at predicting something, but to understand the underlying mechanisms. Take, for example, a data set collected about some patients’ health conditions, from which you can predict whether some drug will work on a given patient. Then it is not always enough to say, “Mrs. Jensen, if you get this cancer medicine, we do not think it will work on you, so you will not get it.” And, “Mr. Hansen, it works on you, so you can have it.”
The natural question such a patient will ask is, “Why? Why can I not get any medicine? Then I die from my cancer.” And it is often not enough to say, “The computer says no.” So there are a number of contexts where it is quite important to be able to explain why the model has reached its decision.

And that’s what we’re working on at Abzu: trying to extract from data the simplest possible explanations — ones that have the same ability to predict, but at the same time are simple enough that a human being can interpret them and explain why the model behaves as it does.
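Casper does not walk through Abzu’s actual algorithm here, but the core idea — searching many candidate formulas and preferring the simplest one that fits — can be sketched as a toy illustration (hand-enumerated candidates, not Abzu’s method):

```python
import numpy as np

# Toy data: the "true" relationship is y = x0 * x1 (an interaction).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] * X[:, 1] + 0.05 * rng.normal(size=200)

# A tiny, hand-enumerated space of candidate formulas.
candidates = {
    "x0": X[:, 0],
    "x1": X[:, 1],
    "x0 + x1": X[:, 0] + X[:, 1],
    "x0 * x1": X[:, 0] * X[:, 1],
    "x0 * x2": X[:, 0] * X[:, 2],
    "x0**2": X[:, 0] ** 2,
}

def score(pred, complexity):
    # Reward fit, penalise complexity (string length as a crude proxy).
    return np.mean((y - pred) ** 2) + 0.01 * complexity

best = min(candidates, key=lambda k: score(candidates[k], len(k)))
print("Best formula:", best)   # -> "x0 * x1": simple, and a human can read it
```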

PH:

So is it a reduction of the original algorithm, or is it something completely different?

Casper:

It’s a completely different way of finding it. Basically, we use some tools that were developed to solve problems in quantum field theory to search through a very, very large number of mathematical forms at the same time. So you could say that we look through millions and millions of formulas and ask which of these formulas best fits the data that has been collected. And then we present the user with the formulas that best describe the data while being as simple as possible, so the users can judge them for themselves.

PH:

But does that mean that the formulas you come up with are interpretable for people — that you can understand what the model does, which parameters it says are more important and which are less important? And how is the precision of the predictions you get with such a model versus a black-box model that can draw on 100 billion different input parameters in its calculation?

Casper:

It depends on the situation. Basically, there are a number of situations where no simple model based on the input can predict the answer, such as in computer vision. A typical input in computer vision is an image, which may consist of millions of pixels, and the model must decide whether there is a bus in this image. There is no simple formula in 3 of the pixels that can tell whether there is a bus. So in that situation, neural networks are pretty superior. But there are also situations where there is what can be called a causal connection.

PH:

Now that you say neural networks can be superior — how do you understand neural networks in that context?

Casper:

Deep learning methods, with the uninterpretable black-box models, can be superior there because they can actually take all the pixels of the image into account at the same time. But in other situations there is a causal explanation.

Let’s say that some disease has a genetic origin, but it only expresses itself if you have a certain mutation, you have a certain behavior — let’s say you smoke — and you do not have a certain other mutation. Now we have a relationship that can never be linear, so the statistical method will not find it, because it is not the case that the more you smoke, the more ill you become. There is a more complex relationship.

One factor must be present, and the second too, but not the third. This kind of connection, deep learning or neural networks can find, but in a completely incomprehensible way; you cannot interpret what is happening. This is where our technology has some potent properties. We can simply distill out the relatively simple formula that says: “If this and this, but not this, then the risk of developing the given disease is high.”
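As an illustration (my construction, not Casper’s data), the rule he describes is a logical conjunction that no weighted sum of the individual factors can represent:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
mut_a  = rng.integers(0, 2, n)   # has mutation A
smokes = rng.integers(0, 2, n)   # smokes
mut_b  = rng.integers(0, 2, n)   # has mutation B

# The causal rule: high risk only if A present, smoker, and B absent.
high_risk = (mut_a == 1) & (smokes == 1) & (mut_b == 0)

# Each factor alone shows only a partial correlation (roughly +/- 0.4);
# the actual mechanism only appears when all three are combined.
for name, v in [("mut_a", mut_a), ("smokes", smokes), ("mut_b", mut_b)]:
    print(name, round(float(np.corrcoef(v, high_risk)[0, 1]), 2))
```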

PH:

And if we then return to the original question of what artificial intelligence is — now that we can talk about black box and white box and whatever we should call it — all in all, how does it relate to our intelligence? Are any of these methods more reminiscent of human intelligence, or are all three significantly different?

Casper:

Well, artificial intelligence is usually divided into two main groups, and that has been the case ever since the field emerged in the 1950s. One is what is called symbolic artificial intelligence, and the other is sub-symbolic.

The two methods also relate, each in their own way, to the way the human brain works. If I ask the two of you a question such as “How many red cars are there in Spain?”, then a process starts inside your head, where you start estimating: how many people are there in Spain, how many of them have cars, and what percentage of all cars are red? And then you can sort of reach a conclusion and say that there are probably about 600,000 red cars. And you can in fact explain how you arrived at it; that is the cognitive process, a so-called symbolic process.

If, on the other hand, I show you a picture of a yellow school bus, no similar process takes place. The moment you see the picture of the bus, you immediately conclude, “Here is a yellow school bus.” You can rationalize and say that it is a school bus because it has wheels and a few windows, it is long, and so on. But that is not how you arrived at the conclusion that it is a bus. It is a sub-symbolic process, and it can take place unconsciously. So the two families of artificial intelligence methods, the symbolic and the sub-symbolic, roughly correspond to the unconscious processing that takes place in our brain on the one hand, and the cognitive processing that also takes place in our brain on the other.

PH:

 And so that’s what Kahneman describes as System 1 and System 2?

Casper:

Kahneman would definitely have said that. Our brain is put together like this: if you look at the visual cortex, you can see that the neurons there are organized in a way that makes them superbly suited to this massively parallel processing of incredibly many pixels, whereas in our neocortex the coupling is far more diffuse and broad-spectrum, which makes it better at the symbolic reasoning that we understand as cognition. What has happened in the last 15 years, with the latest major advances in artificial intelligence, has all been in the sub-symbolic camp: recognizing things in pictures, automatically computing protein folds, and other things where there are no simple relationships.

PH:

That was what you called deep learning before, right?

Casper:

Yes, exactly: Deep learning and neural networks. These are two sides of the same coin.

PH:

But is anyone other than you pursuing this symbolic approach?

Casper:

Yes, symbolic artificial intelligence has been the holy grail ever since the field came into being, and a lot of research has gone into it. I think we have a new, quite unique take on how to make headway, and we have also shown that we can deliver results with very convincing predictive power while at the same time delivering simple mathematical models, so that a researcher working with the system can actually understand what the model says. Of course, this is only in some cases: the cases where there is a real causal connection in the underlying data.

PH:

So is one approaching an understanding? Do you think more about what intelligence really is, now that there are these two different kinds and you have worked, artificially, with both of them? Is intelligence, then, a mixture of the two?

Casper:

Yes, and a few more. Intelligence in itself is a relatively vague concept. But the processes that take place inside the human brain, which we can roughly think of as thinking, contain both of these elements. Again, if I go back to the example where I showed you a picture of a bus: you would very quickly, without any cognitive process whatsoever, be able to tell me that there was a bus in the picture. If I then asked you why it was a bus, you would not start saying, “Because that pixel is yellow, and that pixel is yellow, and that pixel is yellow.” You would post-rationalize an explanation of what it actually is that makes this a bus. And that post-rationalization is what we regard as cognition.

So it’s pretty clear that the two things are interacting in our brain.

PH:

When you have these two different AI schools, while we have a brain that seems to do both at the same time — is someone trying to combine the two?

Casper:

Yeah, and there has not really been much progress in that approach. There is something about the way the sub-symbolic methods work that makes them difficult to integrate with other methods. We have simply taken some shortcuts in the way the parameters of the neural networks are calculated, which makes it difficult to subsequently connect a system that works in a fundamentally different way. So it’s not really something that has come very far.

PH:

Yes, I immediately think as a layman — and it’s probably because I do not understand it so well — that one could apply the black-box methodology with an extra layer, in the sense that you could take your black box’s long, long, long formula and then use a black box one more time, to see if you could find some common denominators that would yield some somewhat smaller forms.

Casper:

And that’s a good idea, too — good thinking. That’s actually also what people have tried to do: you have a pre-processing stage, a black-box neural network, which delivers some intermediate calculations on to a more cognitive-like layer. The problem with that method is that it’s not how the brain works either. There is a deep bidirectional coupling between the cognitive systems of the brain and the visual system of the brain: our neocortex reaches down not just into the output from the visual cortex, but everywhere into it, sending signals back that say, “Hey, maybe you should focus more on the pixels up there.” So there is a deep bidirectionality in that collaboration. Today you cannot do that, because the neural networks — the sub-symbolic methods — have been developed for highly parallelized general processing units, where you cannot make that kind of connection.

PH:

So it only goes one way, you could say, when you try this, whereas the brain works much more as a back-and-forth process?

Casper:

Yes, there are simply some technological limitations that have so far made it impossible. It will come. Now, I did say that a few things need to happen before you get something reminiscent of human intelligence out of a computer. There’s something about experience: the fact that we can interact with our world is an incredibly powerful learning system, so it’s hard to imagine an intelligence arising in a computer unless it somehow has a body that can interact with the world. And the whole field of research called reinforcement learning has also yielded interesting results over time, but again somewhat disconnected from the other methods. So the best bid for a relatively general artificial intelligence that I can come up with would be a fusion of sub-symbolic neural networks, reinforcement learning methods, and then the symbolic methods, like the one I work with at Abzu.

PH:

And if you then look at intelligence in the human sense, you often separate intelligence and emotions and see them as two completely different things. Do you see it that way too? Or are they in fact much more interconnected, two sides of the same coin? And what does that really mean for the understanding of intelligence and artificial intelligence?

Casper:

Yeah, I do not think emotions can be disconnected from intelligence, because intelligence is ultimately about trying to manipulate the world around us, and our desire to manipulate the world around us stems from our emotions. So I cannot quite imagine that one could build an intelligence that has no emotions — or at least simulated emotions. But time will tell.

PH:

But these different types of AI, the way I’ve heard you talk about them — they’re essentially different, and they do different things, so it’s not as if one is better than the other. And you can easily imagine having a question you want to ask where you would have liked an Abzu answer, an absolute answer with an explanation of how it was arrived at. But if there is no causal connection that can be found in a sensible way, then you may not actually be able to get that kind of answer. You have to accept that “computer says no” is the best we can do, and then we are in a different type of AI.

Casper:

That’s definitely true. What I think has happened is that we have made such great strides in the sub-symbolic field that we assume a number of problems we can solve with neural networks are so complicated that they could never be solved symbolically — but where we are in fact wrong, because there are simple, symbolic explanations.

I usually use a metaphor, or an example, from astronomy. Tycho Brahe, the Danish astronomer, made a great number of observations of Mars’ orbit around the Sun. That data ended up in the hands of his student Johannes Kepler, who looked at it for a really long time. And then he came up with some relatively simple, elegant mathematical formulas, which we know today as Kepler’s laws, describing the elliptical orbits in which the planets move. It was difficult; it took quite a bit of brainpower from Johannes Kepler to see that that was what was happening in the data. If Johannes Kepler had had a neural network, he might have been tempted to take Tycho Brahe’s data, fit his neural network on it, and arrive at a model with a million parameters that was quite excellent at predicting Mars’ orbit — and then he might have settled for that: “It’s incredibly complicated, Mars’ orbit is completely unpredictable, but using this very advanced neural network I have created a model that can still predict where Mars will be.” That would not have been so good. Kepler’s laws were what inspired Newton to formulate his theory of universal gravitation, which ultimately ended up starting the modern scientific era and the industrial revolution.

So one can say that science is layers of understanding built on top of each other. I basically think that we as humanity, and as scientists perhaps in particular, should be careful with these black-box models — and not settle for the idea that finding a black-box model is enough — because I think we are missing opportunities to actually advance our scientific understanding of the world around us. We are giving up too fast. We have, as it were, reached the first goal, namely to predict something, but the understanding we easily skip.
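Kepler’s third law shows how compact such a symbolic result is: the square of a planet’s orbital period is proportional to the cube of its orbit’s semi-major axis. A few lines suffice to check it against well-known planetary values (semi-major axis in astronomical units, period in Earth years):

```python
# Kepler's third law: T^2 / a^3 is the same constant for every planet.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")   # all ~1.000
```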

PH:

But take an ordinary researcher who has a lot of observations — for example, numbers on where we have fish farms, how many fish there are in each farm, and how many of a given parasite, for example salmon lice, are on each of these fish. If you want to use your model to ask, “What is really the best predictor of, for example, infection pressure?” — how would you go about it?

Casper:

First and foremost, make sure your data is in a state where machine learning makes sense at all. Here we are talking about supervised machine learning, and that means that if you have it as a table — tabular data — then you are almost ready.

When you have a set of rows, where each row represents an observation including the outcome you would like to predict, then you are really at the goal. Then you can easily apply any machine learning method to the data, including ours. I think you should just give it a try, because we actually have a full, free system available to all researchers: as long as it is for non-commercial purposes, you can freely use our technology to analyze that kind of data, and then you can see for yourself. It may be that the system says the best model it can find is a linear model, reminiscent of the one you would have found if you had run a linear regression on the same data. But it could also be that it says there is an interaction model, where these parameters interact in a way that you might not have anticipated — and then there is new knowledge.
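As a sketch of that “tabular and ready” workflow — using scikit-learn as a generic stand-in, since Casper’s own tool is not shown here, and with a hypothetical file and column names for Rasmus’s salmon-lice example:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per observation, one column per candidate
# predictor, plus the target column you want to predict.
df = pd.read_csv("lice_counts.csv")                    # hypothetical file
X = df[["temperature", "salinity", "fish_per_farm"]]   # hypothetical predictors
y = df["lice_per_fish"]                                # hypothetical target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```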

PH:

Can I pull this in a philosophical direction again? I’m a bit stuck on the division we made before, between intelligence and emotions, and I’m thinking of another element that may be connected here, and which we also relate to as human beings: that we value our knowledge, in one way or another. I imagine that in artificial intelligence research, you also work with evaluating the answers that come out of both black-box and white-box technology. How is the computer able to say that one answer might be better than another, when there are several answers?

Casper:

Are we talking about better in some ethical or moral sense, or just better in terms of a more accurate description of your data?

PH:

Yes, but in reality it is both; I see them as two sides of the same coin. In principle, when you make a calculation and a formula, each answer simply stands in relation to another answer, and that’s fine. But that is not how it is for us. For us as humans, when we are in the world, there are some answers that we think are better and more important than others. So this is about being able to weigh the answers against each other, especially when several answers come out. We talked about the cancer patient before: even if there are 99 answers that say no, we are more interested in the one answer that says yes. So how do you weigh all of these against each other? Is artificial intelligence research also working on valuing the answers?

Casper:

Yes, there are relatively simple methods for dealing with this. The technical term in that situation is binary classification: you are interested in knowing whether you develop cancer or not, and what you’re really interested in is the probability. So you try to make the model as good as possible at correctly saying what the probability is that you develop this disease. It is relatively common practice, as a post-processing step, to ask, “Where do you want to put your threshold? What kind of risk are you willing to accept?” If the model says there is a 10 percent probability that you will develop cancer, should you start chemotherapy, or only if it says 20 or 30 or 100? There will also be different models that are better in different parts of the probability space. Some models are simply better at correctly catching those who develop cancer, but in turn also over-classify, giving a lot of people a false diagnosis. It is the researchers’ task to choose a model that correctly balances these things against each other.
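A minimal sketch of that post-processing step, with toy probabilities and outcomes of my own invention: sweeping the decision threshold trades sensitivity against false alarms.

```python
import numpy as np

# Toy predicted probabilities and true outcomes (1 = develops disease).
p_hat  = np.array([0.05, 0.12, 0.25, 0.40, 0.70, 0.90])
y_true = np.array([0,    0,    1,    0,    1,    1])

# Lower thresholds catch more true cases (sensitivity) but also
# flag more healthy people (false positives), and vice versa.
for threshold in (0.1, 0.3, 0.5):
    flagged = p_hat >= threshold
    tpr = (flagged & (y_true == 1)).sum() / (y_true == 1).sum()
    fpr = (flagged & (y_true == 0)).sum() / (y_true == 0).sum()
    print(f"threshold={threshold:.1f}  sensitivity={tpr:.2f}  false-positive rate={fpr:.2f}")
```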

PH:

Now you said before that if you had your data tabulated in a sensible, structured way, then you were more or less ready to go. Part of what characterizes us as a species — perhaps even, on some points, as a more intelligent species — is that we can approach problems and datasets that we have no intimate knowledge of and begin to see structures. So when you have an artificial intelligence that requires correctly tabulated data input, structured in a particular way, then it will never be able to surpass the structuring of the data that we put in in the first place?

Casper:

That’s right, and that’s why machine learning is not artificial intelligence. Machine learning presupposes a completely and utterly organic intelligence that has facilitated everything for the technology. The intelligence in reality lies with the human being, who gets the idea to collect the data, formats it in a particular way, and presents it to the relevant machine learning algorithm. So machine learning is a building block — it is the learning itself in an artificial intelligence — but it is definitely not artificial intelligence in itself. There are many other things an intelligent creature can do that enable it to act in the world, and machine learning does not address those at all.

PH:

And one of those things, I think, is the directedness with which we are in the world. We set goals, and that’s probably one of the things that an artificial intelligence has a bit of a hard time with.

Casper:

We probably won’t get around the emotions here. It’s a little hard for me to imagine an artificial intelligence that took the initiative to do anything on its own, unless it had some kind of purpose-driven motivation — and purpose-driven motivation is usually what we call a feeling. And then, of course, we can code those emotions in. When you make a system that needs to practice playing chess, some reward mechanisms are in fact coded into the system, which means that the better it performs at chess, the “happier” it becomes. There we have created a kind of simulated — very simple, but simulated — feeling in the system, which tells it what it is striving for. But it also becomes very singular that way. If you want a system that has to go out and act, for good or bad, in the world, then you will have to code some motivations into the system.
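Not the chess system itself, but a minimal sketch of the principle: a reward signal as a “simulated feeling” that steers behavior. Everything below (the two actions, their payoff rates) is invented for illustration.

```python
import random

# A minimal "simulated feeling": the agent prefers whichever action
# has historically produced the higher average reward.
rewards = {"a": [], "b": []}

def pull(action):
    # Invented environment: action "b" pays off more often than "a".
    return 1.0 if random.random() < (0.3 if action == "a" else 0.7) else 0.0

def average(r):
    return sum(r) / max(len(r), 1)

for step in range(1000):
    if random.random() < 0.1:                          # explore occasionally
        action = random.choice(["a", "b"])
    else:                                              # exploit the "happier" action
        action = max(rewards, key=lambda a: average(rewards[a]))
    rewards[action].append(pull(action))

print({a: round(average(r), 2) for a, r in rewards.items()})
# The agent ends up choosing "b" almost exclusively: one motivation,
# very "singular", exactly as Casper describes.
```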

PH:

And it’s not just that you have to code it in — at least if it’s going to resemble a human — because some of what we’ve discussed as characterizing a human is that its action patterns are not predictable, but recognizable. You have an idea of roughly what it will do, but not exactly what it will do. That is, you cannot have a static reward system if you want something similar to human motivation, right?

Casper:

No, and I think in the end we’re heading toward the philosophy of consciousness here. What is it really that motivates us to do the things we do in the world, and to want to achieve the things we achieve, individually and together? There we are very, very far, even philosophically, from understanding what kind of entities we are at all — and therefore, of course, even further from being able to code that kind of mechanism into a system that we create.

PH:

We discussed the interaction between different entities a little earlier today. You talked about reward mechanisms just now, and you talked earlier today about how, if you had three independent variables that affect each other, there would actually be an infinite number of possible outcomes. So you could imagine that if three different reward systems were encoded, then the singularity you talked about before would be gone?

Casper:

That’s an interesting thought. The phenomenon you highlight is sensitive dependence on initial conditions. Among other things, as soon as there are 3 bodies in a system, its long-term development is in fact unpredictable, except for a few specific cases. Yes, that’s interesting. We humans have a number of basic needs that we are motivated by, and perhaps one could achieve the same by building competing motivations into an artificial intelligence.
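The three-body point can be illustrated with an even simpler chaotic system, the logistic map (my example, not Casper’s): two starting points that differ by one part in a billion soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions in the chaotic logistic map.
x, y = 0.400000000, 0.400000001   # differ by one part in a billion
for step in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
print(abs(x - y))   # typically of order 0.1-1: the tiny difference has exploded
```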

PH:

That is, we are approaching the point where we can define the boundary conditions for when we should start worrying about meeting a Terminator on every street corner. But maybe that’s a little further off in time, as I hear you put it, Casper?

Casper:

Yeah, well, I guess I do not belong to the school that thinks we’re going to be surprised by intelligent machines just yet. I think there’s a lot of hype around this, and AI development over the last 70 years has gone through a number of summers and winters. I do not think we are on the verge of designing an intelligence that surpasses our own, and for many of the results that are hyped as if we were close to it, when you look down into the engine compartment, you can see that what is actually going on is incredibly simple and incredibly human-controlled. The fusion of technologies and methods that would be needed to arrive at a real general intelligence, one that might resemble our own, is not in the cards at all yet.

PH:

Funny that you say “not in the cards at all yet”. One day on the train, I passed my grandparents’ farm and sat thinking that my grandfather, just 15-20 years before I was born, used horses and plows when working those fields. So even though you can say something is not in the cards at all, look at what we are sitting with today; I do not think that was seen as being in the cards just a generation ago. So while much may seem a bit speculative, it might be worth keeping in mind, somehow. It is the case with all technology that is developed that it can be used and it can be abused. But when you develop a technology like yours, where you can actually get something out — where you can predict things and then tell someone what it takes to influence an outcome — that is a whole new way to approach a problem space. Is that one of the thoughts you have in the back of your mind? Is this something that can be abused?

Casper:

Yeah, I think the danger we should fear from artificial intelligence is not our AI overlords; it’s ourselves. Artificial intelligence is, like so much else humanity has developed over time, a tool in our own hands. It makes us more powerful and better at doing things, and if we want to do bad things, then of course it’s an unpleasant technology. So far down the road, it’s a regulatory issue: what will we actually accept this technology being used for? I have strong opinions there. With a lot of surveillance and self-learning robots, we simply have to say, in a regulatory way, that this is just not something we use this technology for. But those are decisions we make. Technology is just a tool in our hands.

PH:

But when you say regulatory issues, is it also a bit of a self-regulatory issue? The reason I ask is that I took note, when we visited Abzu’s website, that you have a philosophy of making your algorithm as accessible as you can. Is that from a democratization mindset — a mindset that it should not be exclusively available to certain groups, who could then use it for things we find unpleasant?

Casper:

Yes, one of the really unfortunate consequences a new technology can have is if it ends up in the hands of a few. Technologies that are in the hands of many are at least fair. I basically think there are three issues with artificial intelligence that are important to address. One is superintelligence, which in principle is an existential threat to humanity, and there I think, quite fundamentally, that the risk is greatly exaggerated. I even think there are some opinion makers who have a vested interest in promoting themselves by exaggerating this risk. By far most of what has happened in artificial intelligence over the last 10-15 years is a further development of statistical methods and can do the same things as statistical methods — namely, predict some outcome based on some observations — and that in itself is, and will remain, just a tool in the hands of some wise people. So I simply do not share that concern.

The second issue is that I sometimes think these statistical, or super-statistical, methods like black-box machine learning have a somewhat stupefying effect on us as researchers and scientists. We simply give up on understanding too soon, because we no longer need it. We can set these super-powerful pattern-recognition engines to recognize patterns that we then cannot understand; then we can predict the things we want to predict, and we stop there.

PH:

It’s like when we have a GPS: we forget how to navigate ourselves.

Casper:

Exactly. If you stop looking for the scientific explanations for the observations you have made — because you can still predict what will happen via these technologies — then we actually stop doing research. I think that is a sad development. I see it very much in biology, in life science, that people simply give up: it is too complicated to understand why these molecules are harmful to the liver, but I can make a model that can predict it. Yet it often turns out that there are good, simple, understandable explanations for why these molecules ruin people’s livers. That explanation is important.

PH:

As a philosopher, I might even say that it makes us as humans stop wondering. Wondering is part of what makes us human, so it actually removes us a little from being human. But that is another philosophical discussion.

Casper:

I think that’s exactly right. And the third element is that these tools, like all new technologies, enable some things that were not possible before.

Today, for example, it is possible to use face recognition on pictures. So any intelligence service around the world can build a perfect profile of all the people on Facebook and every other social network — names and histories of everything they’ve done professionally and whatever else they choose to share — put it in a database, start doing data mining on it, and build relatively detailed profiles of us. I do not think we should be surprised if there are 7.3 billion personal profiles somewhere in a Chinese intelligence database, one for every human being on this planet. That kind of power I think we should be afraid of. It is not the technology; it is the application of the technology. We cannot make the problem go away by trying to put the technology back in the bottle. It is out. But we need to understand that it exists and what risks are associated with it, and help ourselves by making rules for what we will actually allow our technology to be used for.

PH:

Is it about democratizing artificial intelligence or what do you think?

Casper:

Yes. But also an acceptance that there is nothing you must not research, but there are things you must not do.

PH:

I think we’ll end it here and say: thank you so much for your time, Casper.

Casper:

It was a pleasure.

Part 2 - Discussion about the interview.

Thomas:

Well, Rasmus, what do you think? Did you get a better understanding of what artificial intelligence — and intelligence — is after talking to Casper?

Rasmus:

I got a better understanding of what artificial intelligence is. I’m not quite sure I became much wiser about what intelligence really is. We talked a lot about machine learning, and then a little about machine learning being used as part of an artificial intelligence, where artificial intelligence is defined more by what it does than by what it is. And that may be why one does not quite get the same concrete idea of what kind of thing artificial intelligence really is.

Thomas:

Yes, I agree with you. The first thing I notice, in terms of understanding artificial intelligence, is this division between two quite different forms of it: one that is able to predict what will happen based on processing some data, and one that actually tries to understand the underlying reason why the outcome is what it is. It’s quite interesting to see the difference between the two.

Rasmus:

Strictly speaking, the two are not both artificial intelligence, but machine-learning processes. And I found it very exciting and interesting — perhaps especially as someone who works a lot with data himself — that he set up the difference between the symbolic and sub-symbolic machine-learning processes so clearly, because then it becomes very clear which one you would really want to use. If you can use a symbolic machine-learning process, which gives you an understandable expression of a relationship, that is clearly preferable in terms of knowledge to a sub-symbolic process, which gives you incomprehensible formulas that nevertheless help you predict something.

So I think you should always approach your datasets by first seeing whether you can find a pattern that can be analyzed symbolically — that is, where you get a kind of distillation of the causal effects in your dataset. Not just pattern recognition, because pattern recognition is also what happens when you analyze images to find a bus. What I, as a professional — as a biologist — am really interested in knowing is: what is the effect of, for example, a treatment, of increased temperature or radiation, or whatever we have. I want to look at the causal effect: if I do such and such, what is the effect then? I want an understandable equation that says: when you take your salmon lice and expose them to higher temperatures, radiation, and other types of stress, what happens, for example, to their transmission activity? I want an expression that tells me: the temperature did not do a damn thing, but when lots of fresh water came in, transmission was activated, and if you also irradiated them, it got even worse — transmission went completely crazy. I want a quantitative model that tells me what is most important, and I would be able to get that with the symbolic approach, that is, with Abzu’s approach, but not with a neural network. There I would only get something that could predict the effect, not an equation I can interpret.

Thomas:

It’s quite interesting that you say that, precisely because the difference between these two forms of machine learning takes on an instrumental meaning for you as a professional. And as you say, we might later start discussing what that means — for what is the difference between machine learning, artificial intelligence, and intelligence? That will be a smooth transition. But since we are on the instrumental and this question of understanding, I think there is a super important element in relation to what we started discussing: what does this mean for us as humans and our understanding of the world? That is, that there is — or that we are looking for — a causal context, and that we want to understand the world through causal relationships. What does it say about us as human beings that this, somewhere deep down, is our starting point? Does it mean the world is causal, or is that something we make it, because we want the world to be understandable?

Rasmus:

One thing is whether it is understandable. Something completely different is that the premise for us and for our existence is that we have tried to understand the world as causal, because that is what has enabled us to manipulate it.

Thomas:

Yes, I think it’s probably deeply ingrained in the way we as humans perceive the world: that we are able to predict that when we do A, then B happens. We plan agriculture around the seasons and know how we want to build cities or machines, and all of this rests on some understanding of causal relationships. So in a way we can say that we, as beings, have a very strong causal focus. And we get hyper-frustrated when we are unable to predict the weather tomorrow, or how the economy will develop next year, and all these different things.

Rasmus:

Here it’s actually a bit interesting, what Casper pointed out: that work on machine learning has had far more success with the sub-symbolic efforts than with the symbolic ones. Given that we as organisms depend on understanding causal mechanisms, our whole focus should have been on trying to develop symbolic machine learning, not sub-symbolic. It is rather strange that this is where the greatest progress has been, because it is not really the tool that, given how we relate to the world, we should have focused on.

Thomas:

But that’s not really how our own intelligence works either. We like to think we are more cognitive, you could say, although some might argue that we are not, because in many ways we also act on learned patterns ourselves — what you yourself mentioned in the conversation with Casper, System 1 and System 2. In reality we use this auto-response: a pattern is recognized, and then we act exactly as we have learned.

Rasmus:

But can I take it a step further, back to the last conversation we had with Casper? Something we also touched on briefly is free will, and that is a topic we often come back to. It strikes me again here that this search for causality — again we approach this topic — if we are always looking for causality, then there is always an explanation that makes us end up in a certain place. And then we somewhat undermine our free will, don’t we?

Thomas:

Yes, I think it does. If there is strict causality, then I think we are pointing towards a deterministic universe, which, I think, would indicate that we may not have so much free will.

Rasmus:

We discussed this with Casper in our last podcast as well, and I think there is something odd in that mindset: trying to understand the world through something that would indicate that we do not have free will, while through everything we do, we have a very obvious perception that we do have free will. It’s a funny tension — a utopia, maybe. And even more so when Casper, toward the end of our conversation here, mentions our fear of superintelligence. Perhaps a superintelligence is precisely something that, in some way other than ourselves, is able to anticipate every single move we humans make. Is that what we are afraid of — that there is someone able both to manipulate us and to understand us much better than we understand ourselves?

Thomas:

Being understood and predicted is probably not what we necessarily fear. In these machine-learning processes there is no will. That is, even if such a process could predict what we would do, there cannot be embedded in it any meaning or perception of what we do.

Rasmus:

That exists in those who do the programming, you could say. The algorithm behind it can have it. Or, as Casper said, it’s the researcher’s responsibility to choose the algorithm that provides the type of answer they need — and there you could say that all the intelligence lies with the operators and not in the software used to analyze the data. Here one may ask: what exactly would it take for such an artificial intelligence to have something approximating free will? It’s only then, I think, that there is something really to worry about — and then I think it might be a very good place to turn to Casper’s three concerns one should have about artificial intelligence.

Thomas:

I think that’s absolutely right, and that was really what I was trying to get at in terms of understanding: what is it? First of all, he talks about superintelligence, the AI overlord, and what I’m trying to ask is: is that really what we’re afraid of? Is it because we do not really understand it, or is it because we have a concrete fear? Many of us may be a little worried about China and think, ‘Here’s a Big Brother who’s suddenly able to predict our every move.’ We have all seen Minority Report. It is not an inconceivable scenario that one could be checked before one even knows where one is going.

Rasmus:

That’s one side. The other side, which I really find particularly interesting, is this: we imagine a scenario where it is really, really scary to have a computer capable of predicting and manipulating the world in a way that we cannot control ourselves — that is, where we lose control. But we do not ask ourselves whether it is not at least as scary if the CIA, or Chinese intelligence, or whoever, gets access to a machine-learning process that they can use to extract enough information to manipulate us as effectively as ‘SkyNet’ would have.

Thomas:

You could say that; that is Casper’s third concern, the part about abuse. And I think it’s a really important distinction, because the superintelligence itself could in fact also, if you will, be a god: one who is both much wiser and more skilled and more considerate than we are as human beings. And therefore we should perhaps fear ourselves more than we should fear a superintelligence.

Rasmus:

I think having a fixed idea of what a superintelligence would be is a bit stillborn. Look at how evolution generally takes place: the new species that emerge, we are not usually able to predict. Sometimes there are gradual transitions, where they get a slightly bigger beak or something, and there may be some logic in that. But the really big shifts — those you could not have predicted.

Thomas:

There are paradigm shifts in evolution. And I would really like to say: if you suddenly get a digital intelligence, or a digital ‘something that can be included as part of intelligence’, there is a second option. It is not certain that it would be the supercomputer, or a super-AI, that takes over. It could just as well be a symbiosis between a computer able to interact with, for example, some people, and together they constitute a power factor that is incredibly strong. There you get a form of synthetic enhancement for those who have access to it.

Rasmus:

As for the AI overlord, the synthetic intelligence — a computer able to outsmart us all — Casper concluded very clearly that he thought it was so far in the future that it did not really worry him that much…

Thomas:

There was no need to worry so much about that — also because we are far from understanding what intelligence as such is, and therefore what we are building now is more machine learning than anything else. We also touched quite briefly on directedness: that part of intelligence, and of what may become consciousness, is a directedness within intelligence — that one wants to achieve something. We also talked briefly about reward systems, which are used, among other things, to teach deep-learning computers to play chess. You build in some simple mechanism whereby something is better than something else. But what one may not have examined so carefully is what these reward mechanisms mean for developing that directedness. Could one put in several different reward mechanisms and thereby give it a directedness that is suddenly much harder to predict — yet still, in relation to your definition, Rasmus, of what intelligence is, retains a kind of recognizability without being entirely predictable? Are there really different reward mechanisms working together that could create that kind of intelligence?

Rasmus:

I think it can at least be something that helps balance the behavior of living organisms, for example, and as such I clearly believe it is an important place to start. We have actually made a model for how an ‘I’ can be constituted and shaped, and I think that model is conceptually quite different from the model that underlies what I mechanically make myself. It is not certain that our models are correct, but if that model actually turns out to be correct, then it may give us a clue as to what it takes to generate an ‘I’ that has something reminiscent of free will — or at least something that can generate a desire to do one thing rather than another. And that may be a kind of breaking point, where you suddenly go from saying that we do not understand consciousness to saying that here we actually have something that can be programmed and modeled, and which may prove to be a kind of quantum leap, a paradigm shift, in the understanding of consciousness. And if there is such a paradigm shift, then we may get to the point where Casper’s view that there is not much to worry about is no longer entirely valid, because one has suddenly found out what it really takes to generate a will of one’s own.

Thomas:

And that’s typically how science develops: not so linearly, but in big leaps that can suddenly change things. And then we are suddenly inside some big and important ethical considerations about how we relate to building what could become an overlord. And another point, which you also just mentioned: it goes back to our idea of what evolution is, and there is another important element here. It may be the first time that we are able to develop a form of life that is fundamentally different from the form of life based on carbon — that is, all of us organisms. That, through thinking consciousness, we have taken what our intelligence is based on and can recreate it in a completely different form. And then evolution has jumped from one compartment to another.

Rasmus:

It has, and it’s quite funny that in all our future scenarios, in all our narratives about the future, it is always portrayed as if an intelligence suddenly arrives and annihilates us. One does not consider that it is in fact a further development of our own intelligence; one might say that it does not kill us, it is part of our continuity into eternity, part of what we leave behind evolutionarily.

Thomas:

Yes, that way man becomes just a step along the way. But I’d like to go back to Casper’s AI overlord. I think an AI overlord could just as well be an incredibly good machine learning process that makes it possible to manipulate me into doing exactly what someone wants me to do. The machine does not have to have a directedness in itself; the directedness may come from those who are behind it.

Rasmus:

Casper had some thoughts on how to approach this, or avoid it. Or, I don’t think I want to say avoid, because I do not think misuse can be avoided; I think we have to relate to it. And the way we can relate to it is through regulating use. What do you think about that mindset?

Thomas:

Casper himself mentions that democratization and openness around this may be a first step in securing the kind of regulation that is essential.

Rasmus:

But is that not also a regulation of power, i.e. a regulation that simply gives many people the same power? Translated into war terminology, it would be the same as saying that to prevent the misuse of nuclear weapons, we give everyone nuclear weapons. There is also something about setting the different scenarios up against each other: a few are able to control many by having access to a particular device, whether it’s an artificial intelligence or an atomic bomb. If we accept that we are not able to avoid a kind of spread and development, then it is better to say that we should all be able to really understand, develop and work with this, so that it does not become a tool in the hands of the few. But I agree with you that it is not given that one is necessarily better than the other.

Thomas:

I think Casper might be right that that’s the only thing we can do. He did not say how; he said we had to regulate, and I think regulation is probably the only thing we can do. But it can be said that the regulation of atomic bombs, for example, has gone the opposite way of democratization; it has been kept in the hands of the few. What I find interesting, especially in the talk about democratization, is this: when we talk about artificial intelligence, we say that there will have to be a general openness. The same discussion is now going on about the metaverse. Should Facebook present what they use and how they arrive at their results? It’s a requirement from the EU, for example, and it’s essentially the same discussion. And then I imagine that the best way to deal with or prevent the abuse of artificial intelligence is through conscious and sensible use. That will of course vary depending on the point of view you see it from, but there is a bit of a paradox in what Casper points out: many of these machine learning algorithms can almost be a bit stupid in themselves. They do not help us become more educated and enlightened organisms, but actually just make us capable of manipulating without understanding what we are doing. That does not guarantee that we will have a wiser approach to artificial intelligence.

Rasmus:

That was his second warning, and I think we probably all experience it in reality. My own kids too, in terms of using social media and the way one gets steered somewhere, served the same kinds of pictures every single time one has looked at them. It gives a kind of satisfaction, which brings us back to the reward mechanisms from before. Once the algorithm has figured out what gives us joy in this life, it just keeps giving it to us, and so we will be happy. Then we are in reality held in a kind of indifference to anything in the world other than that which gives us immediate satisfaction.
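The loop Rasmus describes is easy to caricature in code. Here is a minimal sketch (an editorial toy, not any real platform’s algorithm; the categories and click probabilities are invented) of a feed that tracks which kind of post earns clicks and then serves more of the same:

```python
import random

CATEGORIES = ["news", "sports", "cat_pictures", "cooking"]
# Hypothetical user: one category is far more "satisfying" than the rest
user_click_prob = {"news": 0.1, "sports": 0.2, "cat_pictures": 0.8, "cooking": 0.3}

clicks = {c: 0 for c in CATEGORIES}
shows = {c: 0 for c in CATEGORIES}

for step in range(1000):
    # Warm up with random posts, then mostly show the best click rate so far
    if step < 40 or random.random() < 0.05:
        choice = random.choice(CATEGORIES)
    else:
        choice = max(CATEGORIES, key=lambda c: clicks[c] / max(shows[c], 1))
    shows[choice] += 1
    if random.random() < user_click_prob[choice]:
        clicks[choice] += 1

# The feed collapses onto the single category the user rewards
print(shows)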

Thomas:

But let’s try to sum up. Much of this gets a little high-brow and general: ‘well, then you have to behave like that, and we have to regulate, and we have to reform ourselves.’ So quite concretely and down to earth: does the knowledge you have gained about what machine learning and artificial intelligence are mean anything in your everyday life? Or is it just something you talk about on a podcast, and then you put it on the shelf and that was it?

Rasmus:

For me, understanding more about what artificial intelligence is is actually an important step, also towards understanding how I myself deal with the world. It is about, among other things, understanding what consciousness is and what it is I myself respond to, by understanding these algorithms as part of what I surround myself with. I may even consist of algorithms, and if there is some kind of programmed artificial intelligence that understands the way I handle my dealings with the world better than I do, then I am in fact being controlled. So can we come to understand the way we deal with our sensory impressions, our dealings with the world, by understanding what artificial intelligence is? I think this is very important, because it may also make me more aware of where I meet this in my everyday life. We have talked about social media, for one, and it might also make me a little better at resisting it when I meet it, because I am aware that it is coming.

Thomas:

For me it becomes very concretely applicable on two points. One is, of course, that I have become aware that there are classes of machine learning processes that can actually help me analyze data sets I otherwise have difficulty analyzing, and that can give me suggestions for causal relationships. That is operationally valuable for a researcher, so it struck me as a very concrete tool. I have not had time to try it yet, but I will try it and see if it helps me get closer to an understanding of a data set. The other thing that has been very concrete is that, as I said before, we have been working for a while with this model of consciousness, of what it is that generates our selves, if I may say so, and our consciousness. I had always thought that it might be a bit high-brow and a bit fluffy, maybe a bit contrived, a bit unusable. And then a conversation like this with Casper makes it quite clear that if such a model actually works, then one can go from having something that is just machine learning to something that might actually become an AI able to generate a drive, a will, a directedness even. I don’t think it is necessarily desirable that we manage to generate it, but I think it is desirable to know what it takes to generate it, also in order to prepare oneself and society mentally for what effects it will have. So it went from being a slightly high-brow exercise that I could have a hard time defending spending a whole lot of time on, to being something that might actually be worth spending time on.
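The kind of tool Thomas refers to searches for explicit, readable formulas that explain a data set, rather than opaque weights. As a deliberately tiny illustration of that idea (an editorial sketch; this is not Abzu’s QLattice, and the candidate formulas and data are invented), here is a brute-force search over a handful of formula shapes:

```python
import math
import random

# Invented data set: y depends on x roughly as 3*x^2, plus a little noise
data = [(x, 3 * x * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(-20, 21)]]

# A small library of candidate formula shapes, each with one free scale a
candidates = {
    "a * x": lambda a, x: a * x,
    "a * x**2": lambda a, x: a * x * x,
    "a * sin(x)": lambda a, x: a * math.sin(x),
    "a * exp(x)": lambda a, x: a * math.exp(x),
}

def mse(f, a):
    """Mean squared error of formula f with scale a over the data set."""
    return sum((f(a, x) - y) ** 2 for x, y in data) / len(data)

best = None
for name, f in candidates.items():
    # Crude fit of the scale parameter by grid search over -5.0 .. 5.0
    a = min((k / 10 for k in range(-50, 51)), key=lambda a: mse(f, a))
    err = mse(f, a)
    if best is None or err < best[2]:
        best = (name, a, err)

print(f"best formula: {best[0]} with a = {best[1]:.1f} (mse {best[2]:.3f})")
```

A real system searches a vastly larger space of expressions, but the output has the same character: a formula a human can read, check, and argue with, which is what makes the suggested causal relationships possible to evaluate.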

Rasmus:

And then hopefully the understanding might also help us have a more qualified conversation with each other.

Thomas:

Yeah, and there I think this podcast might actually be a little helpful, because it gives us a slightly better idea of what machine learning and artificial intelligence are. And that means we can take a slightly more intelligent approach to forming opinions about them.
