To Forecast AI's Impact on Biosecurity, We Asked: Why are Attacks So Rare?
Nine factors that practitioners say make bioweapons rare.
When people discuss potential risks from advanced AI, the list usually includes bioweapons. The idea is that a rogue actor, or possibly a misaligned AI, might use AI tools to help synthesize and release anything from anthrax to a civilization-ending supervirus. To shed more light on this risk, Golden Gate Institute for AI’s Abi Olvera interviewed biosecurity professionals with decades of hands-on experience in laboratories. What she learned was more reassuring than headlines suggest: bioweapons are genuinely difficult to make, and near-term developments in AI may not change that as much as you might think. This is part one of her four-part series explaining why; the remaining installments will appear here at Second Thoughts in the coming weeks.
Bioweapons are rarely used.1
One reason is that they’re hard to make. Another reason is that they’re bad weapons.
You can’t time them – a virus spreads on its own schedule. You can’t aim them – a virus spreads uncontrollably. Protecting your own people from a virus requires tipping your hand – a vaccination program is hard to hide.
People who want to cause harm still follow a cost-benefit logic. They try to pick the tool most likely to achieve their goal. For almost any real-world objective, that tool is a bomb, a gun, a chemical weapon, or a cyberattack. These are cheaper, faster to create, easier to deploy, more controllable, and more predictable than a bioweapon.
This cost-benefit logic is well-studied in military strategy, but less well recognized in current discussions of bioweapon risks from AI. That gap is important. AI risk researchers and biosecurity practitioners are both worried about bioweapons, but they’re working from different starting points. AI researchers focus on ways AI could help with bioweapon construction. Biosecurity practitioners focus on the most critical limiting factors for bioweapon creation.
This series focuses on the practitioner’s view. It draws on conversations with biosecurity professionals with decades of hands-on laboratory experience. The series’ four essays cover four questions: Why are bioweapons rare? How much laboratory skill do the necessary processes actually require? Where in the production chain does AI help and where doesn’t it? Why does the biosecurity discourse underplay the factors that make successful bioweapons so rare?
These are the factors that practitioners say make bioweapons rare:
Additional notes regarding “Access”: the estimate of how many people have access to facilities is back-of-the-envelope.2 Additionally, lab work is not the only route to pathogens.3
AI does lower some barriers. Large language models can make some steps of bioweapon creation easier and cheaper, such as deciphering research on how to culture cells or disperse pathogens. AI biodesign tools reduce the expertise required to design modifications to a virus’s genome.4 AI can also help with steps that have nothing to do with biology, such as knowing which international shipping routes to use when buying supplies to avoid regulatory oversight and customs inspections. It can help actors improve their planning and coordination skills.
However, the operational and planning capabilities AI provides also make it easier to deploy bombs, chemical weapons, and cyberattacks. AI therefore doesn’t yet significantly change the calculus for someone wanting to cause harm.
This doesn’t mean we should be complacent. New technologies can shift the math.
Pathogens targeting genetic traits like ancestry or sex could, if ever feasible, make bioweapons more attractive to certain actors. New kinds of actors could also emerge: for instance, large economic shifts could destabilize many people’s lives, producing a larger supply of disgruntled individuals interested in bioweapons. Robotics and automated laboratories could reduce the level of competence required for success.
Accurately assessing these future risks depends on a clear understanding of the present. The people who understand these risks best aren’t the ones writing about them publicly. Most of them work in laboratories, government agencies, or the national security world. Advocacy and policy organizations fill that gap, though their incentives push them toward focusing on worst-case scenarios. That’s a big reason public discourse treats AI-enabled bioweapons as more imminent and accessible than practitioners do.
The next installment in this four-part series examines tacit knowledge: the gap between written protocols and real-world lab work, and how that limits AI’s potential to actually help someone build a bioweapon.
Thanks to Steve Newman, Taren Stinebrickner-Kauffman, Mike Montague, Matt Sharkey, Gigi Gronvall, and David Manheim for suggestions and feedback.
Since the 1980s, only one fatal bioterrorist attack has occurred: the 2001 anthrax letters. Other incidents, such as a Salmonella attack to influence a local election, a medfly release targeting California crops, and two ricin letter attempts, caused no deaths.
Two experts estimated how many Americans have the resources to misuse viral genomes, with estimates ranging from tens of thousands to under a hundred once expertise is factored in. We use the higher figure as a rough order-of-magnitude estimate.
Matt Sharkey (RAND) started with U.S. academic biology departments—according to Bureau of Labor Statistics and National Science Foundation data, the population under consideration is roughly 100,000 people (~40k life sciences faculty plus ~60k biological or biomedical sciences doctoral students, with ~45k Master’s students not counted, as many Master’s programs do not involve benchwork). His assumptions include that about 75% of these people can perform molecular cloning, but only about 20-30% of them have access to the cell culture equipment required for something like influenza. Applying those filters yields 20,000-30,000 technically capable people. Since only about a third would independently work out the reverse genetics approach from published protocols, the academic estimate drops to 6,000-10,000. Adding the private sector could push that 1.5x-3x higher.
Michael Montague (Archimedes Network and Quantum Biology Institute) started broader (400,000–700,000 people, including industry PhDs and technicians) but applied three additional cuts: only 10% have “boot-up” knowledge to activate an assembled genome; only 1 in 10 labs has the right equipment; and only 1 in 100 people have unsupervised, long-term lab access to run a multi-hundred-day project undetected. He estimated that roughly 1,000 people in academia have such access, though boot-up difficulties reduced his final estimate to 40 to 70 people.
Disagreement between the experts centered on how difficult boot-up is and whether nighttime lab use would go unnoticed.
Instead of making bioweapons in a lab, attackers could try to deliberately get infected by visiting places where people already have dangerous diseases like tuberculosis or Ebola. However, this requires finding actively sick people, getting close enough to catch the disease, and perfect timing around contagiousness windows. Attackers would also be stuck with natural transmission rates.
Bioweapons based on engineered pathogens remain unproven. Their feasibility is unclear. Novel bioweapons will be discussed in the third essay.