Bird Flu Leaves the World With an Existential Choice

To prepare for future outbreaks, we’ll have to decide which is the greater danger: nature or ourselves.

After three bleak years, the coronavirus pandemic is finally drawing to a close, but pandemics as a general threat very much are not. At the moment, the most pressing concern is H5N1, better known as bird flu. Public-health experts have worried for decades about the virus’s potential to spark a pandemic, and the current strain has been devastating global bird populations—not to mention spilling over into assorted mammalian hosts—for more than a year. But those worries became even more urgent in mid-October, when an outbreak of the virus on a Spanish mink farm seemed to show that the mink were not only contracting the disease but transmitting it to one another.

Occasional spillover from birds to mammals is one thing; transmission among mammals—especially those whose respiratory tracts resemble humans’ as closely as mink’s do—is another. “I’m actually quite concerned about it,” Richard Webby, an influenza expert at St. Jude Children’s Research Hospital, told me. “The situation we’re in with H5 now, we have not been in before, in terms of how widespread it is, in terms of the different hosts it’s infecting.”

What’s hard to gauge at this point, Webby said, is the threat this spread poses to humans. That will depend on whether we can transmit the virus to one another and, if so, how efficiently. So far the risk is low, but that could change as the virus continues to spread and mutate. And the kind of research that some scientists say could allow us to get a handle on the danger is, to put it mildly, fraught. Those scientists think it’s essential to protect us from future pandemics. Others think it risks nothing less than the complete annihilation of humanity. Which way you feel—and how you think the world should approach pandemic preparedness as a whole—comes down to which type of pandemic you think we should be worried about.


The point of the research in question is to better understand pandemic threats: deadly viruses are enhanced in a highly controlled laboratory setting so that we can prepare for similar pathogens out in the world. This type of science is often referred to as gain-of-function research. If that term sounds familiar, that’s probably because it has been invoked constantly in debates about the origins of COVID-19, to the point that it has become hopelessly politicized. On the right, gain-of-function is now a dirty term, inextricable from suspicions that the pandemic began with a lab leak at China’s Wuhan Institute of Virology, where researchers were doing experiments of just this sort. (Solid evidence points toward the competing theory: that the coronavirus jumped to people from animals. But a lab leak hasn’t totally been ruled out.)

As a category, gain-of-function actually encompasses a far broader range of research. Any experiment that genetically alters an organism so that it does something it didn’t before—that is, gains a function—is technically gain-of-function research. All sorts of experiments, including many of those used to produce antibiotics and other drugs, fall into this category. But the issue people fret about is the enhancement of pathogens that, if released into the world, could conceivably kill millions of people. When you put it that way, it’s not hard to see why this work makes some people uncomfortable.

Despite the inherent risks, some virologists told me, this sort of research is crucial to preventing future pandemics. “If we know our enemy, we can prepare defenses,” the Emory University virologist Anice Lowen told me. The research enables us to pinpoint the specific molecular changes that allow a virus capable of spreading among animals to spread among humans; our viral surveillance efforts can then be targeted to those adaptations in the wild. We can get ahead on developing countermeasures, such as vaccines and antivirals, and ascertain in advance how a virus might evolve to circumvent those defenses.

This is not merely hypothetical: In the context of the bird-flu outbreak, Lowen said, we could perform gain-of-function experiments to establish whether the adaptations that have allowed the virus to spread among mink enhance its ability to infect human cells. In fact, back in 2011, two scientists separately undertook just this sort of research, adapting H5N1 to spread among ferrets, whose respiratory tract closely resembles our own. The research demonstrated that bird flu could not only spill over into mammalian hosts but, under the proper circumstances, pass between mammalian hosts—just as it now seems to be doing—and perhaps even human ones.

The backlash to this research was swift and furious. Critics—and there were many—charged that such experiments were as likely to start a pandemic as to prevent one. Top flu researchers put a voluntary moratorium on their work, and the National Institutes of Health later enacted a funding moratorium of its own. With the imposition of stricter oversight regimes, both moratoriums were eventually lifted and the furor subsided, but researchers did not race to follow up on the two initial studies. Those follow-ups could have given us a better sense of the likelihood that H5N1 might develop the ability to transmit between humans, Angela Rasmussen, a virologist at the Vaccine and Infectious Disease Organization in Saskatchewan, Canada, told me. “That moratorium really did have a big chilling effect,” she said. “We haven’t really been looking at the determinants of mammal-to-mammal transmission, and that’s the thing that we really need in order to understand what the risk is to the human population.”


The controversy over the origins of the coronavirus pandemic has renewed calls for gain-of-function bans—or at least additional oversight. Last month, the National Science Advisory Board for Biosecurity delivered a set of recommendations that, if adopted, would tighten regulations on all sorts of virology research. Piled on top of more widespread anti-science rhetoric throughout the pandemic, the recommendations have contributed to a sense of embattlement among virologists. The advisory board is “responding to a lot of hyperbole without any real evidence,” Seema Lakdawala, a flu-transmission expert at Emory University, told me.

But other researchers—those more on the “fear the annihilation of humanity” side of the aisle—see the recommendations as progress, important if not anywhere near sufficient. “Laboratory accidents happen,” Richard Ebright, a molecular biologist at Rutgers University who earlier this month co-founded an anti-gain-of-function biosecurity nonprofit, told me. “They actually happen on a remarkably frequent basis.” The 1977 flu pandemic, which killed roughly 700,000 people, may well have started in a laboratory. Anthrax, smallpox, and other influenza strains have all leaked, sometimes with deadly consequences. The original SARS virus has done so too—several times, in fact, since its natural emergence in 2003. And we might never know for sure how the coronavirus pandemic began. (This lack of definitive evidence has not stopped Ebright from tweeting that NIH officials potentially share the blame for the millions of deaths COVID-19 has caused worldwide.)

To researchers in this camp, the benefits of experimenting with such dangerous pathogens simply do not justify the risks. Sure, identifying mutations that could lead to transmissibility among humans might be marginally beneficial to our surveillance efforts, Kevin Esvelt, an evolutionary biologist at MIT, told me, but viruses can make that jump via innumerable genetic pathways, and the odds of our hitting on one that’s ultimately relevant are tiny. In the case of bird flu, researchers told me, the mutations that appear to have made the virus transmissible among mink are not the mutations identified in the 2011 studies. This sort of research, Ebright said, “has existential risk, but effectively no benefit or extremely marginal benefit.”

Huge numbers of lives are at stake. The world has just lost millions to the coronavirus. Imagine the same again, or even much worse, due to our own folly. Even if we managed to swiftly contain an outbreak, Esvelt worries, the mere perception of a lab leak could irreparably damage the public trust and lead people to question far safer pandemic precautions. More doubts, more deaths. “‘Double or nothing’ doesn’t even begin to describe it,” Esvelt told me. “I’m just not comfortable risking so much of the biomedical enterprise on that toss of the dice.”

But a lab leak isn’t Esvelt’s main concern. What really worries him is bioterrorism. Imagine that researchers identify a pandemic-capable virus and share its genome sequence, as well as all the information necessary to replicate it, with the scientific community. Other scientists begin developing countermeasures, but now hundreds or even thousands of people have the ability to make something with the potential to kill millions; it’s not exactly a well-kept secret. Someone goes rogue, replicates the virus, releases it in an international-airport terminal, and, just like that, you’ve got a pandemic. Esvelt compares the danger to that of nuclear proliferation; in his view, nothing else comes close in sheer destructive capacity.

Part of what’s so tricky about this whole debate is that, unlike most competing public-health priorities, the question of whether to conduct this research is an either/or: we can’t have it both ways. If we do the research and publish the results, we might improve our chances of preventing natural pandemics, but we necessarily create a security risk. If we don’t do the research, we don’t create the security risk, but we also don’t reap the preventive benefits. We have to choose.

To do so with any degree of certainty, Esvelt told me, we’d need to put hard numbers to the risks and benefits of bringing a dangerous pathogen into a lab. So far, he said, few people have done that math. Plenty of researchers, he thinks, may not even realize the trade-off exists: “If you’re the kind of person that has dedicated your life to working in comparative obscurity on a shoestring budget, trying to prevent a calamity with no real hope of reward or recognition, the very possibility that humans could be malevolent enough to deliberately cause the catastrophe that you fear—honestly, I think they’re such good people that it just never occurs to them.”
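To see the shape of that math, consider a deliberately simple, purely hypothetical calculation (the numbers are invented for illustration, not drawn from Esvelt or any actual analysis): if a program of enhanced-pathogen research carried even a one-in-a-thousand annual chance of a leak that killed 10 million people, its expected annual cost would be 0.001 × 10,000,000 = 10,000 lives. To come out ahead, the research would have to avert more expected deaths than that, through better surveillance, vaccines, or antivirals, in every year it ran. Whether it does is exactly the calculation that, by Esvelt’s account, few people have attempted.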

The gain-of-function proponents I spoke with were by no means naive about the threat of bioterrorism. “It’s both presumptuous and patronizing to suggest that biosecurity risks never occur to virologists,” Rasmussen said. She pointed out that lab members undergo background checks and receive regular training on mitigating security risks (blackmail, extortion, etc.). Excessive oversight poses its own threat, her camp argues. The risk, Lowen told me, is that we “lose the war against infectious diseases by winning a battle on research safety.” “There is a great threat out there from nature,” she said. “Relative to that, the threat from laboratory accidents is small.”

Regardless of who’s right, Esvelt’s broader point is a good one: How you feel about this research—and which kind of pandemic worries you most—is, in the end, a question of how you feel about human nature and nature itself, and the relation between the two. Which should we fear more—our world or ourselves?

Jacob Stern is a contributing writer at The Atlantic.