From code to pathogen: Are AI viruses on the horizon?

In the span of just a few years, artificial intelligence has gone from novelty to near-necessity in research labs around the world, helping academics draft grant proposals, design novel proteins and even decode complex diseases. At the same time, synthetic biology has been rewriting life itself, offering scientists the ability to reprogram living cells with unprecedented precision. And now, these two ‘revolutions’ are converging. The fusion of AI and synthetic biology is opening doors to astonishing new possibilities in medicine, agriculture and beyond, but it’s also raising urgent questions about how far this power should go, and who gets to decide.

And not all doors lead to good places!

As AI gets better at understanding and designing DNA, some experts are raising serious concerns. Could a rogue actor, or even a well-meaning researcher, use AI to cook up new pathogens? Not in a Hollywood, sci-fi kind of way, but in a very real-world, ‘we-have-the-tools-now’ sort of way. A handful of papers have already suggested how shockingly easy it might be to prompt an AI model trained on chemistry to spit out the structure of a known toxin… or perhaps worse, a new one.

AI models, especially large language models (LLMs), have demonstrated remarkable capabilities in helping to design biological sequences and model protein structures. Studies under review suggest that AI can generate novel toxic proteins and small molecules, some resembling known toxins such as ricin and diphtheria toxin. This raises concerns about the ease with which harmful agents could be designed.

The accessibility of AI tools means that individuals with limited technical expertise might exploit these technologies to design dangerous pathogens. Indeed, Nobel laureate Geoffrey Hinton – the so-called ‘Godfather of AI’ – recently warned that AI could provide detailed instructions for assembling viruses. My first thought was, “Oh, well, I’m not surprised”, assuming he meant computer viruses (ransomware, worms, Trojans and the like). But then I realised he was referring to real viruses that infect organisms – those pieces of “bad news wrapped up in a protein”, as Peter Medawar put it. It’s a characterisation I disagree with, as viruses can be hugely beneficial too, but you get the drift. This could be disastrous.


How do we mitigate these risks? Comprehensive oversight of AI applications in biotechnology is vital. This might include implementing stringent screening processes for synthetic DNA orders, developing AI models with built-in safety constraints and establishing international ethical guidelines and regulatory frameworks.
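To make the first of those measures concrete, here is a minimal, purely illustrative Python sketch of how a synthesis provider might screen an incoming DNA order against a database of sequences of concern. The database entries and the exact-substring matching are hypothetical stand-ins of my own: real screening frameworks, such as the International Gene Synthesis Consortium’s Harmonized Screening Protocol, rely on homology searches against curated pathogen and toxin databases, combined with customer vetting.

```python
# Illustrative sketch only: a toy biosecurity screen for synthetic DNA orders.
# Real screening uses homology search (e.g. BLAST) against curated databases
# of sequences of concern, not exact substring matching, and the "fragments"
# below are made-up placeholders, not real pathogen sequences.

SEQUENCES_OF_CONCERN = {
    # Hypothetical placeholder entries.
    "example_toxin_fragment_1": "ATGGCCAAATTTGGG",
    "example_toxin_fragment_2": "TTTCCCGGGAAATTT",
}


def screen_order(order_sequence: str) -> list[str]:
    """Return names of any flagged fragments found in the ordered sequence.

    Checks both the sequence as ordered and its reverse complement, since
    a fragment of concern could be encoded on either strand.
    """
    complement = str.maketrans("ACGT", "TGCA")
    rev_comp = order_sequence.translate(complement)[::-1]
    hits = []
    for name, fragment in SEQUENCES_OF_CONCERN.items():
        if fragment in order_sequence or fragment in rev_comp:
            hits.append(name)
    return hits


if __name__ == "__main__":
    order = "CCCATGGCCAAATTTGGGCCC"  # contains example_toxin_fragment_1
    flagged = screen_order(order)
    if flagged:
        print(f"Order flagged for manual review: {flagged}")
    else:
        print("No matches to sequences of concern.")
```

Even this toy version hints at where the hard problems lie: exact matching is easily evaded by introducing synonymous mutations or splitting an order across multiple providers, which is exactly why AI-assisted sequence design raises the stakes for screening.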

Balancing innovation with security is therefore imperative. Proactive governance and responsible development are essential if we are to harness the benefits of AI-driven synthetic biology while safeguarding against potential threats.

We’ve taught machines to read DNA. What happens as they get better at writing it? Could the next disease outbreak come from code?

Check out my next book, The Nature of Pandemics – out in October! https://pelagicpublishing.com/products/the-nature-of-pandemics
