AI and Biosecurity: Patching the Gaps in Digital Defense Against Harmful Proteins
Recent research has revealed vulnerabilities in the software that guards against the use of artificial intelligence to design dangerous proteins, along with ways to close them. Scientists have developed new patches that strengthen biosecurity screening tools, addressing loopholes that could be exploited by those with malicious intent.
Around the world, specialized software screens requests to synthesize artificial proteins, serving as a critical safeguard against the production of harmful substances such as toxins or viral components. However, researchers have discovered that using AI to make slight modifications to known dangerous proteins can yield variants that sometimes slip past these protective systems. The good news, according to findings published in the October 2 issue of Science, is that reinforcing the screening software can significantly improve its ability to flag these AI-designed proteins before they become a threat.
“AI advances are fueling breakthroughs in biology and medicine,” explains Eric Horvitz, Microsoft’s chief scientific officer based in Redmond, Washington. “Yet with new power comes responsibility for vigilance and thoughtful risk management.” This balance between innovation and safety has become increasingly important as AI tools make protein design more accessible and powerful.
Proteins, the fundamental workhorses of biology, perform essential cellular functions, from building cell structures to transporting materials throughout the body. While AI can generate digital blueprints for proteins by determining their amino acid sequences, it cannot physically create these proteins without human intervention. The actual production process requires DNA manufacturers to assemble the appropriate genetic code and ship these synthetic genes to research laboratories.
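As a rough illustration of that hand-off (this is not code from the study), the sketch below shows how a digital blueprint, which is just a string of amino acid letters, could be reverse-translated into one possible DNA sequence for a manufacturer to synthesize. The codon table, the function name and the harmless toy sequence are arbitrary choices made only for illustration.

```python
# Illustrative only: turning an AI-style protein "blueprint" (amino acid
# letters) into one possible DNA coding sequence a manufacturer could build.
# Real gene orders involve codon optimization, vectors and other details.

CODONS = {  # one arbitrary codon per amino acid used below
    "M": "ATG", "G": "GGT", "S": "TCT", "K": "AAA",
    "L": "CTG", "V": "GTT", "A": "GCT", "E": "GAA",
}

def reverse_translate(protein: str) -> str:
    """Map each amino acid letter to a codon and append a stop codon."""
    return "".join(CODONS[aa] for aa in protein) + "TAA"

blueprint = "MGSKLVAE"  # a short, harmless stand-in for a designed protein
print(reverse_translate(blueprint))  # ATGGGTTCTAAACTGGTTGCTGAATAA
```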
To test the effectiveness of existing biosecurity measures, Horvitz and his colleagues conducted extensive simulations to identify potential weaknesses in screening models. The team generated approximately 76,000 blueprints for 72 harmful proteins, including deadly substances like ricin, botulinum neurotoxin, and proteins that facilitate viral infection in humans. Their findings were concerning: while the biosecurity screens successfully flagged nearly all proteins in their original forms, many AI-modified versions managed to evade detection. Fortunately, the team also discovered that software patches could significantly improve detection rates, even identifying potentially harmful genes after they had been broken into fragments. Even with these improvements, however, about 3 percent of harmful variants still went undetected.
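The cat-and-mouse dynamic the team probed can be pictured with a toy sketch. Real screening tools match ordered sequences against curated databases of sequences of concern using far more sophisticated methods; the hypothetical naive_screen and patched_screen functions below, run on an arbitrary placeholder sequence, only illustrate why a handful of scattered edits can pull a whole-sequence match below a cutoff while a check over short windows still picks up local similarity.

```python
# Toy model only (not the screening software in the study): flag an order if
# it closely resembles a known sequence of concern, either end to end or in
# short overlapping windows.

def identity(a: str, b: str) -> float:
    """Fraction of aligned positions that match (crude stand-in for homology search)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def naive_screen(order: str, hazards: list[str], threshold: float = 0.9) -> bool:
    """Flag only if the whole order closely matches a known hazard."""
    return any(identity(order, h) >= threshold for h in hazards)

def patched_screen(order: str, hazards: list[str], window: int = 10,
                   threshold: float = 0.9) -> bool:
    """Also flag if any short window closely matches the same stretch of a hazard."""
    if naive_screen(order, hazards, threshold):
        return True
    for h in hazards:
        for i in range(min(len(order), len(h)) - window + 1):
            if identity(order[i:i + window], h[i:i + window]) >= threshold:
                return True
    return False

hazard = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary placeholder, not a real toxin
edits = {3: "V", 11: "L", 20: "K", 28: "T"}   # a few scattered substitutions
variant = "".join(edits.get(i, aa) for i, aa in enumerate(hazard))

print(naive_screen(variant, [hazard]))    # False: overall similarity dips below the cutoff
print(patched_screen(variant, [hazard]))  # True: a local window still matches closely
```

Even in this cartoon version, enough edits in the right places could defeat the window check as well, which is one way to picture why roughly 3 percent of variants still slipped past the patched tools.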
Notably, this research was conducted entirely in digital space; no physical proteins were synthesized in laboratories. It also remains unclear whether the AI-generated variants would retain their harmful functions if they were actually produced. This highlights both the proactive nature of the research and the current limits of predicting how digital designs would translate to physical reality.
Despite the concerning possibilities, James Diggans, vice president of policy and biosecurity at Twist Bioscience, a DNA synthesis company based in San Francisco, offers reassurance about the real-world situation. “Close to zero” people have actually attempted to produce malicious proteins, he noted during a news briefing. Unlike cybersecurity threats, which occur constantly, attempts to create harmful biological substances through legitimate channels are exceedingly rare. “These systems are an important bulwark against [threats], but we should all find comfort in the fact that this is not a common scenario,” Diggans emphasized. This perspective provides valuable context, suggesting that while strengthening biosecurity screening is necessary, the immediate risk level remains low.