Could AI Really Kill Off Humans?
NEWS | 29 August 2025
In a popular sci-fi cliché, one day artificial intelligence goes rogue and kills every human, wiping out the species. Could this truly happen? In real-world surveys, AI researchers say that they see human extinction as a plausible outcome of AI development. In 2024 hundreds of these researchers signed a statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Pandemics and nuclear war are real, tangible concerns, more so than AI doom, at least to me, a scientist at the RAND Corporation, where my colleagues and I do all kinds of research on national security issues. RAND might be best known for its role in developing strategies for preventing nuclear catastrophe during the cold war. My co-workers and I take big threats to humanity seriously, so I proposed a project to research AI’s potential to cause human extinction.

My team’s hypothesis was this: no scenario can be described in which AI is conclusively an extinction threat to humanity. Humans are simply too adaptable, too plentiful and too dispersed across the planet for AI to wipe us out with any tools hypothetically at its disposal. If we could prove this hypothesis wrong, it would mean that AI might pose a real extinction risk.

Many people are assessing catastrophic hazards related to AI. In the most extreme cases, some assert that AI will become a superintelligence with a near-certain chance of using novel, advanced technology such as nanotechnology to take over Earth and wipe us out.
Forecasters have tried to estimate the likelihood of existential risk from an AI-induced disaster, often predicting a 0 to 10 percent chance that AI will cause humanity’s extinction by 2100. We were skeptical of the value of predictions like these for policymaking and risk reduction.

Our team consisted of a scientist (me), an engineer and a mathematician. We swallowed our AI skepticism and, in very RAND-like fashion, set about detailing how AI could actually cause human extinction. A simple global catastrophe or societal collapse was not enough for us. We were trying to take the risk of extinction seriously, which meant we were interested only in a complete wipeout of our species. We weren’t trying to find out whether AI would try to kill us; we asked only whether it could succeed in such an attempt.

It was a morbid task. We went about it by analyzing exactly how AI might exploit three major threats commonly perceived as existential risks: nuclear war, biological pathogens and climate change. It turns out it will be very hard, though not completely out of the realm of possibility, for AI to get rid of us all.

The good news, if I can call it that, is that we don’t think AI could eliminate humans by using nuclear weapons. Even if AI somehow acquired the ability to launch all of the 12,000-plus warheads in the nine-country global nuclear stockpile, the explosions, radioactive fallout and resulting nuclear winter would most likely still fall short of causing an extinction-level event. Humans are far too plentiful and dispersed for the detonations to directly target all of us. AI could detonate weapons over the most fuel-dense areas on the planet and still fail to produce as much ash as the meteor that wiped out the dinosaurs, and there are not enough nuclear warheads in existence to fully irradiate all the planet’s usable agricultural land.
In other words, an AI-initiated nuclear Armageddon would be cataclysmic, but it would probably not kill every human being; some people would survive and have the potential to reconstitute the species.

We did deem pandemics a plausible extinction threat. Previous natural plagues have been catastrophic, but human societies have soldiered on. Even a minimal population (a few thousand people) could eventually revive the species. A hypothetical pathogen with 99.99 percent lethality would leave more than 800,000 humans alive. We determined, however, that a combination of pathogens probably could be designed to achieve nearly 100 percent lethality, and AI could be used to deploy such pathogens in a manner that assured rapid, global reach. The key limitation is that AI would need to somehow infect or otherwise exterminate communities that would inevitably isolate themselves when faced with a species-ending pandemic.

Finally, if AI were to accelerate garden-variety anthropogenic climate change, it would not rise to an extinction-level threat. We would seek out new environmental niches in which to survive, even if that meant moving to the planet’s poles. Making Earth completely uninhabitable for humans would require pumping something much more potent than carbon dioxide into the atmosphere. The bad news is that such far more powerful greenhouse gases exist. They can be produced at industrial scales, and they persist in the atmosphere for hundreds or thousands of years. If AI were to evade international monitoring and orchestrate the production of a few hundred megatons of these chemicals (an amount less than the mass of plastic that humans produce every year), that would be sufficient to heat Earth to the point where no environmental niche would be left for humanity.

To be clear: none of our AI-initiated extinction scenarios could happen by accident. Each would be immensely challenging to carry out. AI would somehow have to overcome major constraints.
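The survivor figure for a 99.99 percent lethal pathogen follows from simple arithmetic. A minimal sketch in Python, assuming a world population of roughly 8 billion (the population constant and the `survivors` function are illustrative, not from the article):

```python
# Illustrative back-of-the-envelope calculation: how many people would
# remain alive after a pathogen with a given lethality rate, assuming
# (hypothetically) a world population of about 8 billion.
WORLD_POPULATION = 8_000_000_000

def survivors(lethality: float, population: int = WORLD_POPULATION) -> int:
    """Return the number of people left alive if a fraction `lethality` die."""
    return round(population * (1.0 - lethality))

print(survivors(0.9999))  # 800,000 survivors at 99.99 percent lethality
```

Even at 99.99 percent lethality, the surviving population would be hundreds of times larger than the few-thousand-person minimum the article cites as enough to eventually revive the species, which is why only a near-100-percent-lethal combination of pathogens registers as a true extinction threat.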
In the course of our analysis, we also identified four things that our hypothetical superevil AI would require to wipe out humankind. It would need to somehow set an objective to cause extinction. It would have to gain control over the key physical systems that create the threat, such as the means to launch nuclear weapons or the infrastructure for chemical manufacturing. It would need the ability to persuade humans to help it and to hide its actions long enough to succeed. And it would have to be able to carry on without humans around to support it, because even after society started to collapse, follow-up actions would be required to cause full extinction.

Our team concluded that if AI did not possess all four of these capabilities, its extinction project would fail. That said, it is plausible that someone could create an AI with all these capabilities, perhaps even unintentionally. Developers are already trying to build agentic, or more autonomous, AI, and they have observed AI with the capacity for scheming and deception.

But if extinction is a possible outcome of AI development, doesn’t that mean we should follow the precautionary principle and shut it all down because we’re better safe than sorry? We say the answer is no. The shut-it-down approach makes sense only if people don’t care much about the benefits of AI. For better or worse, people care a great deal about the benefits it is likely to bring, and we shouldn’t forgo them to avoid a potential but highly uncertain catastrophe, even one as consequential as human extinction.

So will AI one day kill us all? It is not absurd to say it could. At the same time, our work shows that we humans don’t need AI’s help to destroy ourselves. One surefire way to lessen extinction risk, whether from AI or some other cause, is to increase our chances of survival by reducing the number of nuclear weapons, restricting globe-heating chemicals and improving pandemic surveillance.
It also makes sense to invest in AI-safety research even if you don’t buy the argument that AI is a potential extinction risk. The same responsible AI-development approaches that mitigate extinction risk will also mitigate risks from other AI-related harms that are less consequential but more certain to occur.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
Author: Michael J.D. Vermeer.