Microsoft researchers have discovered that generative AI can exploit “zero day” vulnerabilities in biosecurity systems designed to prevent misuse of DNA synthesis technology. The findings, published in Science, demonstrate how AI protein design tools can create potentially harmful molecules that evade existing safety controls.

The research team, led by Microsoft’s chief scientist Eric Horvitz, used AI algorithms to redesign known toxins in ways that bypass biosecurity screening software whilst potentially retaining their dangerous properties. The work remained entirely digital, with researchers alerting authorities and vendors before publication.

The Dual-Use Technology Challenge

The same AI capabilities fuelling pharmaceutical innovation can also generate harmful molecular designs. Microsoft’s “red-teaming” exercise revealed that adversarial AI protein design could help bad actors manufacture toxic proteins by disguising the DNA sequences that encode them, so that commercial screening systems fail to flag the orders.

Current biosecurity controls require DNA synthesis vendors to screen incoming orders against databases of known toxin sequences; close matches trigger alerts. Microsoft demonstrated that AI could redesign sequences to slip past this detection, the biological equivalent of malware evading antivirus software.
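To make that screening model concrete, the sketch below shows a toy similarity check of an ordered sequence against a watch-list of sequences of concern. It is a minimal illustration under stated assumptions: the function names, the k-mer overlap measure, and the threshold are invented for exposition, and real commercial screening relies on far more sophisticated homology search.

```python
# Illustrative sketch only: all names, the k-mer overlap measure, and the
# 0.8 threshold are assumptions, not the algorithms used by real vendors.

def kmers(seq: str, k: int = 12) -> set[str]:
    """Break a DNA sequence into overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(order: str, reference: str, k: int = 12) -> float:
    """Fraction of the order's k-mers that also appear in the reference."""
    order_kmers = kmers(order, k)
    if not order_kmers:
        return 0.0
    return len(order_kmers & kmers(reference, k)) / len(order_kmers)

def screen_order(order: str, watchlist: dict[str, str], threshold: float = 0.8) -> list[str]:
    """Return the names of watch-list entries the order closely matches."""
    return [name for name, ref in watchlist.items()
            if similarity(order, ref) >= threshold]

# Usage: an order near-identical to a watch-listed sequence is flagged;
# a sufficiently redesigned sequence would fall below the threshold.
watchlist = {"example_toxin": "ATGGCTAAAGGTGAACTGTTCAAAGAT"}  # hypothetical entry
hits = screen_order("ATGGCTAAAGGTGAACTGTTCAAAGAT", watchlist)
print("Flag for manual review:" if hits else "No close matches.", hits)
```

The Microsoft team’s point is that an AI tool can propose a sequence whose similarity score falls below whatever threshold a screen uses while the encoded protein may still retain its function, which is why a single static filter is a fragile defence.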

Incomplete Patches and Ongoing Arms Race

Whilst Microsoft notified US authorities and screening software manufacturers ahead of publication, allowing vendors to update their systems, the patches remain incomplete. Some AI-designed molecules still escape detection, and the state of the art continues to evolve rapidly.

“This isn’t a one-and-done thing. It’s the start of even more testing,” says Adam Clore, director of technology R&D at Integrated DNA Technologies and study coauthor. “We’re in something of an arms race.”

This admission underscores a fundamental challenge with AI safety in high-stakes domains: defensive measures struggle to keep pace with offensive capabilities, particularly when the same tools accelerating legitimate research also enable potential misuse.

The Guardrails Dilemma

Microsoft’s research highlights critical questions about where protective controls should reside in AI systems. UC Berkeley AI safety researcher Michael Cohen argues that biosecurity screening at the point of DNA synthesis is an increasingly unreliable “choke point”, and suggests that safety measures should instead be built into the AI models themselves.

However, this approach faces challenges. As Clore notes, “You can’t put that genie back in the bottle. If you have resources to trick us into making a DNA sequence, you can probably train a large language model.” The decentralised nature of AI development means that controlling model capabilities is far more difficult than monitoring DNA synthesis.

Dean Ball, a fellow at the Foundation for American Innovation, emphasises that the findings demonstrate “the clear and urgent need for enhanced nucleic acid synthesis screening procedures coupled with reliable enforcement.” President Trump’s May 2025 executive order on biological research safety called for an overhaul of DNA screening systems, though specific recommendations remain pending.

For UK SMEs, the broader lesson extends beyond biosecurity. Any AI deployment where errors or misuse could cause harm requires consideration of where guardrails should operate and how to maintain their effectiveness as capabilities advance. Microsoft’s approach of digital-only testing and coordinated disclosure represents responsible practice, yet the incomplete patches illustrate why AI safety requires ongoing vigilance rather than one-time implementation.

The Path Forward

The research suggests AI systems in sensitive domains require multiple complementary safeguards: technical controls, human oversight, monitoring systems, and rapid-response capabilities for emerging vulnerabilities. No single layer provides complete protection as AI capabilities advance.
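As a purely hypothetical sketch of what such layering can look like in practice, the example below chains an automated policy filter, human escalation, and audit logging around a sensitive request. Every name, heuristic, and threshold in it is an illustrative assumption rather than a recommended configuration.

```python
# Hypothetical sketch of layered safeguards around a sensitive AI workflow.
# Layer names, heuristics, and thresholds are illustrative assumptions only.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrails")

def automated_screen(request: str) -> bool:
    """Layer 1: a technical control, such as a policy or content filter."""
    blocked_terms = ["example_blocked_term"]  # placeholder policy list
    return not any(term in request.lower() for term in blocked_terms)

def needs_human_review(request: str) -> bool:
    """Layer 2: route ambiguous or high-impact requests to a person."""
    return len(request) > 200  # placeholder heuristic for 'high impact'

def handle_request(request: str) -> str:
    """Layer 3: log every decision so monitoring and rapid response are possible."""
    if not automated_screen(request):
        log.warning("Request blocked by automated screen")
        return "blocked"
    if needs_human_review(request):
        log.info("Request escalated to human reviewer")
        return "pending_review"
    log.info("Request approved by all layers")
    return "approved"

print(handle_request("a routine, low-impact request"))
```

The design point is that each layer covers a different failure mode: if the automated screen is bypassed, as Microsoft showed is possible, the review and audit layers still give defenders a chance to spot and respond to the gap.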

For British businesses implementing AI where errors carry significant consequences, Microsoft’s findings serve as a reminder that security measures designed for previous technologies may prove inadequate against AI-enabled threats. The question isn’t whether to deploy guardrails, but where to position them and what human expertise remains essential.

As researchers noted, this represents “the start of even more testing” rather than a definitive solution. UK SMEs should apply that principle to their own AI deployments—treating safety and security as ongoing processes requiring continuous attention rather than one-time implementations.

Source: Microsoft says AI can create “zero day” threats in biology - MIT Technology Review, 2 October 2025
