Explainable AI: Designing better molecules to become drugs.

Oligonucleotide Therapeutics Society (OTS) 2023 poster and keynote.

Disa Tehler, Helena Britze, Martin Kerr, and Lykke Pedersen.

OTS poster:

Background and key takeaways.

With explainable AI in the form of symbolic regression, we gain an improved understanding of what drives drug properties alongside accurate drug property predictions. Here we use the QLattice to generate siRNA activity models from publicly available data, creating insights that can be used to design active siRNAs. Importantly, the workflow presented here can also be applied to predict any desired outcome, such as toxicity or duration, or even be used for disease understanding and target identification. A minimal code sketch of the workflow follows the takeaways below.
  1. We generate Model A: a highly accurate activity model for unmodified siRNAs.
    • The model reveals that specific dimers at the first and last positions of the duplex are more or less beneficial.
    • However, an siRNA with unfavourable dimers at these positions can still be active given the right target binding energy (whole ΔG).
    • Nevertheless, an siRNA with the worst possible first and last dimers cannot be saved by an optimal binding energy.
  2. Interestingly, fully chemically modified siRNAs do not fit Model A, indicating that other features drive the activity of these molecules.
  3. Model B is therefore generated using the same input features as Model A but trained on data from chemically modified siRNAs. It shows improved predictive performance for modified siRNAs.
  4. In general, Model A performs poorly on modified siRNAs and Model B performs poorly on unmodified siRNAs, suggesting different main drivers of activity for these two classes of siRNAs.
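To make the workflow concrete, here is a minimal sketch using Abzu's feyn Python package (the interface to the QLattice). The file name and the column names (first_dimer, last_dimer, whole_dG, modified, activity) are illustrative placeholders, not the poster's actual dataset or feature pipeline, and the train/test split is only one reasonable choice.

```python
# Minimal sketch of the Model A / Model B workflow with the QLattice (feyn).
# All column names and the CSV file are hypothetical placeholders.
import pandas as pd
import feyn
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical table: one row per siRNA with the first/last duplex dimers,
# the overall target binding energy (whole ΔG), a modification flag,
# and the measured activity.
df = pd.read_csv("sirna_activity.csv")

unmodified = df[df["modified"] == 0].drop(columns="modified")
modified = df[df["modified"] == 1].drop(columns="modified")

def fit_activity_model(data, seed=42):
    """Train a symbolic-regression activity model on one class of siRNAs."""
    train, test = train_test_split(data, test_size=0.2, random_state=seed)
    ql = feyn.QLattice(random_seed=seed)
    models = ql.auto_run(
        data=train,
        output_name="activity",
        kind="regression",
        stypes={"first_dimer": "c", "last_dimer": "c"},  # categorical dimers
    )
    return models[0], test  # auto_run returns the best models first

# Model A: trained on unmodified siRNAs.
model_a, test_unmod = fit_activity_model(unmodified)
# Model B: same input features, trained on chemically modified siRNAs.
model_b, test_mod = fit_activity_model(modified)

# Cross-evaluation: score each model on its own class and on the other class.
for name, model in [("Model A", model_a), ("Model B", model_b)]:
    for label, test in [("unmodified", test_unmod), ("modified", test_mod)]:
        r2 = r2_score(test["activity"], model.predict(test))
        print(f"{name} on {label} siRNAs: R^2 = {r2:.2f}")
```

The central step is auto_run, which searches for small closed-form models; the best model can then be inspected directly to see which dimer and binding-energy terms it uses, which is where the explainability comes from.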


Understanding what drives drug properties, in conjunction with accurate drug activity predictions, is a clear advantage for improving drug candidate hit rates.

Download our poster from the 19th Annual Meeting of the Oligonucleotide Therapeutics Society 2023.

We combine explainable AI methodologies to help scientists quickly build and test new hypotheses with sufficient evidence and accuracy for critical decision-making.


Try the QLattice.

Experience the future of AI, where accuracy meets simplicity and explainability.

Models developed by the QLattice have unparalleled accuracy, even with very little data, and are uniquely simple to understand.



The QLattice accelerates discoveries with explainable insights.

Researchers and scientists cite Abzu’s QLattice symbolic AI in industry-leading journals for introducing a new standard of performance and explainability to data sets.
