Cracking the code: Hybrid AI’s logical edge over deep learning.

Deep learning works like half a brain – without the other half, AI cannot meet growing expectations.

This editorial on hybrid AI was first featured in Spiceworks.

Generative AI is great at recognizing patterns, but it cannot explain “the why” behind how it gets to its results. AI must explain the logic used to reach conclusions for mission-critical businesses to trust it, says Casper Wilstrup, founder & CEO of Abzu.

Generative AI is fantastic at quickly answering complex questions with surprisingly good accuracy. Analyzing millions of data points in a split second, it recognizes patterns in its training data similar to what’s being asked of it and provides an answer.

But how it gets to those answers is invisible to the one asking the question. We have seen instances of popular AI applications like ChatGPT giving false information because they are designed to always provide an answer, and we cannot test that information because we have no way of seeing how the model arrived at it. That’s fine if you’re writing your high school graduation speech, but a company building mission-critical applications or developing and testing vital medications cannot rely on its answers.

For example, consider an LLM (large language model) and how it forms sentences by pattern recognition. When writing, most of us don’t approach the process by thinking, “Ah, because it’s a dependent clause modifying an independent clause, I don’t need a conjunction; I need a comma.” Instead, you know what feels “right” or that it “looks better this way” because you have learned and recognized the pattern. Generative AI works much like our brain in this regard. It recognizes patterns. It doesn’t use the logic of grammar rules; it makes a prediction based on patterns. It has seen sentences like that over and over, so it “knows” how a sentence should look.
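To make the pattern-versus-rules distinction concrete, here is a deliberately tiny sketch in Python – the nine-word training text and the bigram table are invented for illustration, and real language models predict from learned weights rather than a lookup table. The point is simply that the next word is chosen from observed patterns; there isn’t a grammar rule anywhere in the code.

```python
from collections import Counter, defaultdict

# A deliberately tiny "language model": it predicts the next word purely from
# how often words follow each other in its training text. No grammar rules
# exist anywhere in this code.
corpus = "the train was late because the cleaner was late".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training data."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("was"))  # -> "late", chosen by pattern frequency, not by a rule
```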

In short, generative AI, or sub-symbolic AI, works like half a brain. Without the other half – deductive reasoning and the ability to explain and justify its conclusions – AI won’t meet exceedingly high expectations from consumers and businesses or regulatory frameworks.

Hybrid AI will overtake generative AI.

OpenAI was a game-changer in showing how accessible and revolutionary AI could be. But now we need more explainability. We need to understand how the AI got to the conclusion it did.

The only way to achieve this is with a hybrid application, where the pattern-recognition power of sub-symbolic AI is combined with the reasoning and logical understanding of explainable, or symbolic, AI. That way, decision-making is based on logical reasoning, similar to how a human expert reaches conclusions that can be trusted, but in a fraction of the time.

Hybrid AI takes the best of both worlds. The pattern-recognition power of sub-symbolic AI combined with the explainability of symbolic AI pushes development far beyond what it is today. With more critical industries able to trust and audit their AI tools, hybrid AI will surpass current technology.
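As an illustration only – this is not Abzu’s actual method, and every function, feature name, and threshold below is a placeholder – one way to picture a hybrid decision step in Python is a stand-in pattern-recognition model that proposes an answer, plus a layer of explicit symbolic rules that checks it and records the reasons:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    answer: str
    approved: bool = True
    reasons: list[str] = field(default_factory=list)  # the auditable "why"

def pattern_model(features: dict) -> str:
    """Stand-in for the sub-symbolic half: fast pattern matching, no justification."""
    return "go" if features["predicted_risk"] < 0.3 else "no-go"

# The symbolic half: explicit, human-readable rules that can be audited.
RULES = [
    ("predicted risk is within the accepted threshold", lambda f: f["predicted_risk"] < 0.3),
    ("decision is backed by enough historical observations", lambda f: f["sample_size"] >= 100),
]

def hybrid_decide(features: dict) -> Decision:
    decision = Decision(answer=pattern_model(features))   # half one proposes
    for description, rule in RULES:                       # half two checks and explains
        if rule(features):
            decision.reasons.append(f"passed: {description}")
        else:
            decision.approved = False
            decision.reasons.append(f"failed: {description}")
    return decision

print(hybrid_decide({"predicted_risk": 0.2, "sample_size": 250}))
```

Whatever the pattern-recognition half proposes, the symbolic half leaves a list of reasons a human can benchmark, fact-check, and critically assess.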

How this works in practice.

Let’s take a look at how this can impact mission-critical operations. For example, efficiency is the key element in running a profitable operation in the transportation sector. The industry already uses AI for traffic flow predictions, travel time optimization, and capacity planning. But sub-symbolic AI is always looking backward, only having its historical training data to go on and never adapting to changes in the system state.

What if the cleaner is 15 minutes late getting the train ready? What knock-on effects will that create, and how will those affect the profitability of that train journey? Hybrid AI can figure this out by identifying causal relationships and constantly learning by applying analytical skills to historical and real-world observational data and adapting to changes in the system state.
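As a toy sketch of that knock-on reasoning – the step names, slack minutes, and costs are all invented for the example – a causal model can propagate the cleaner’s 15-minute delay through the downstream steps and show exactly where it is absorbed:

```python
# Toy sketch of the cleaner-is-late scenario: propagate a delay through the
# downstream steps so every knock-on effect is explicit and traceable.
# Step names, slack minutes, and the initial delay are all invented.
schedule = [
    ("cleaning finished", 0),          # (step, slack in minutes that can absorb delay)
    ("boarding starts", 5),
    ("departure", 5),
    ("arrival at next station", 10),
]

def propagate_delay(initial_delay: int) -> list[tuple[str, int]]:
    """Return each step with the delay still remaining after its slack absorbs part of it."""
    remaining, trace = initial_delay, []
    for step, slack in schedule:
        remaining = max(0, remaining - slack)
        trace.append((step, remaining))
    return trace

for step, delay in propagate_delay(15):
    print(f"{step}: {delay} min behind schedule")
# cleaning finished: 15 min behind schedule
# boarding starts: 10 min behind schedule
# departure: 5 min behind schedule
# arrival at next station: 0 min behind schedule
```

Because every step and assumption is explicit, a decision-maker can trace exactly why the model expects the journey’s profitability to suffer – and by how much.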

By following the reasoning made by the system, decision-makers can see which aspects are deemed necessary enough to implement changes, benchmark, fact-check, and critically assess the reasoning behind changes. This means implemented measures are safer and easier to justify, both for the company and the people dependent on them.

Increased accountability and transparency in data and decisions are also interesting to EU lawmakers, with the much-anticipated AI Act expected to be passed soon. AI providers will soon be obligated to comply with “minimal transparency requirements”: users should be made aware of a model’s explainability – “the why” behind the decision-making – whenever they interact with AI. With hybrid AI, this is already baked into the deal, and people can already have confidence in its models.

The challenges.

Naturally, there are challenges with implementing hybrid AI. Technology that is perceived as novel is always harder to jump on board with than what’s already on the market. The sheer abundance of algorithms and products built on sub-symbolic AI that have come to market in the past year means these solutions feel easy to find and implement. Replacing costly and time-consuming processes with AI tools has become the norm for many companies, especially those looking to cut costs. As such, it is easy to rely on these rather than looking for new, more complex solutions. AI can do pretty much anything we want it to, so why rock the boat?

The truth is that upcoming regulations could render many of these tools obsolete, as some commonly used LLMs may be deemed illegal because they are trained on copyrighted data. Such legislation is up for debate in the EU and U.S. and would mean many, if not most, of the current AI solutions out there would be unable to operate as they do today. Replacing these tools could take entire new departments and result in significant costs. The alternative is creating new models and algorithms in-house, which would also be a huge and time-consuming endeavor. The risks of sticking with the devil you know are substantial.

Why it matters.

Transportation is only one concrete example. The same logic applies to industries like pharma, healthcare, or other critical parts of society where decisions must be made fast, but they also have to be right. When errors in critical decisions have serious consequences, answers from the magic black box of sub-symbolic AI tools aren’t reliable sources of information. It is, therefore, clear that the limitations of sub-symbolic AI must be taken seriously, and building decision-making processes on these isn’t a sustainable practice.

By combining the pattern-recognition capabilities of sub-symbolic AI with explainability and transparent processes in more sophisticated symbolic AI applications, we take the world’s most influential technical development since the internet to a point where it’s not just fast; it also produces what people expect from it – clear, transparent answers to some of our most pressing challenges.

Especially in the case of AI, people use what they trust and understand. Businesses that make it easy for their AI users to see how they get their insights and recommendations will win, not only with their AI users but also with regulators and consumers – and with their bottom lines.

Casper is the founder and CEO of Abzu®. He is passionate about the impact of AI and the intersection of AI with philosophy and ethics.
