The scene opens on a sparse, gray landscape, a gnarled tree in the foreground, bits of ash slowly drifting down from the sky. On the horizon, a few huddled figures stumble forward and into a bleak future. If this sounds familiar, it’s because it’s a common visual trope in post-apocalyptic films. Usually, these films tell the story of a catastrophe — an asteroid strike, perhaps, or a nuclear war — that causes humanity’s demise, and then follow the challenges that the remaining humans face as they try to save their species from extinction.
Such films grip the public imagination. But what if human extinction were less a cinematic scenario and more a looming reality? That might seem like a sensational question, but in fact, dozens of researchers around the world spend their days grappling with this very possibility, and with how we might avoid it.
Their task isn’t easy. There are multiple theories about what might ultimately cause human extinction — everything from alien invasions to catastrophic asteroid strikes. But among those investigating this question, there’s a general consensus that some risks to human life are more plausible than others. Researchers in the field have a name for these: They call them “existential risks.” What follows here is just a sampling — a few of the risks that researchers have at the top of their minds.
Nuclear war
An existential risk is different from what we might think of as a “regular” hazard or threat, explained Luke Kemp, a research associate at the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom. Kemp studies historical civilizational collapse and the risk posed by climate change in the present day. “A risk in the typical terminology is supposed to be composed of a hazard, a vulnerability and an exposure,” he told Live Science. “You can think about this in terms of an asteroid strike. So the hazard itself is the asteroid. The vulnerability is our inability to stop it from occurring — the lack of an intervention system. And our exposure is the fact that it actually hits the Earth in some way, shape or form.”
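To make that decomposition concrete, the disaster-risk literature often sketches the relationship as a simple product of the three components (a schematic illustration here, not a formula Kemp himself offered):

$$\text{Risk} = \text{Hazard} \times \text{Vulnerability} \times \text{Exposure}$$

Under this framing, driving any one factor toward zero (deflecting the asteroid, say, or building the intervention system that removes the vulnerability) drives the overall risk toward zero as well.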
Take nuclear war, which history and popular culture have etched onto our minds as one of the biggest potential risks to human survival. Our vulnerability to this threat grows as countries produce highly enriched uranium and as political tensions between nations escalate; that vulnerability, in turn, shapes our exposure.
As is the case for all existential risks, there are no hard estimates of how much of Earth’s population a nuclear firestorm might eliminate. But the effects of a large-scale nuclear winter — the period of freezing temperatures and limited food production that would follow a war, caused by a smoky nuclear haze blocking sunlight from reaching the Earth — are expected to be profound. “From most of the modeling I’ve seen, it would be absolutely horrendous. It could lead to the death of large swathes of humanity. But it seems unlikely that it by itself would lead to extinction,” Kemp said.
Pandemics
The misuse of biotechnology is another existential risk that keeps researchers up at night. Biotechnology harnesses biology to make new products, and one potential abuse in particular concerns Cassidy Nelson: engineering deadly, fast-spreading pathogens. “I worry about a whole range of different pandemic scenarios. But I do think the ones that could be man-made are possibly the greatest threat we could have from biology this century,” she said.
As acting co-lead of the biosecurity team at the Future of Humanity Institute at the University of Oxford in the United Kingdom, Nelson researches the biosecurity issues facing humanity, such as new infectious diseases, pandemics and biological weapons. She notes that a pathogen specifically engineered to be as contagious and deadly as possible could be far more damaging than a natural pathogen, potentially dispatching large swathes of Earth’s population in very little time. “Nature is pretty phenomenal at coming up with pathogens through natural selection. It’s terrible when it does. But it doesn’t have this kind of direct ‘intent,’” Nelson explained. “My concern would be if you had a bad actor who intentionally tried to design a pathogen to have as much negative impact as possible, through how contagious it was, and how deadly it was.”
But despite the fear that prospect might create — especially in our currently pandemic-stricken world — she believes the probability of it happening is slim. (It’s also worth mentioning that all evidence indicates COVID-19 wasn’t created in a lab.) While scientific and technological advances are steadily lowering the threshold for engineering such a pathogen, “that also means that our capabilities for doing something about it are rising gradually,” she said. “That gives me a sense of hope, that if we could actually get on top [of it], that risk balance could go in our favor.” Still, the magnitude of the potential threat keeps researchers’ attention trained on this risk.
From climate change to AI
A tour of the threats to human survival can hardly exclude climate change, a phenomenon that’s already driving the decline and extinction of multiple species across the planet. Could it hurl humanity toward the same fate?
The accompaniments to climate change — food insecurity, water scarcity and extreme weather events — are set to increasingly threaten human survival at regional scales. But looking to the future, climate change is also what Kemp described as an “existential risk multiplier” at the global scale, meaning that it amplifies other threats to humanity’s survival. “It does appear to have all these relationships to both conflict as well as political change, which just makes the world a much more dangerous place to be.” Imagine food or water scarcity intensifying international tensions and triggering a nuclear war, with potentially enormous human fatalities.
This way of thinking about extinction highlights the interconnectedness of existential risks. As Kemp noted earlier, it’s unlikely that a mass extinction event would result from a single calamity like a nuclear war or pandemic. Rather, history shows that most civilizational collapses are driven by several interwoven factors. And extinction as we typically imagine it — the rapid annihilation of everyone on Earth — is just one way it could play out.
A catastrophic event might leave only a few hundred or thousand survivors on Earth, which would call humanity’s viability as a species into question. Alternatively, a collapse could wipe out only a segment of humanity but consequently trigger global insecurity and conflict, reducing our resilience to other threats and setting in motion a more gradual decline. “We’re not talking about a single idea of what an extinction would look like, or how it would unfold. It’s more nuanced than that,” Kemp explained.
There’s another angle to this as well: an existential risk doesn’t necessarily have to threaten humanity’s survival in order to count. A risk might instead curtail our potential as a species — whether that’s our capacity to become a spacefaring race, or to reach a certain level of technological dominion. “In some ways, that’s almost as much of a threat to our existence,” said Nelson. In other words, it shatters our idea of humanity’s purpose — which, some might argue, is to progress. One prominent risk that fits into this category is artificial intelligence: researchers speculate that intelligent machines, unintentionally unleashed on the world, might impose widespread surveillance on humans, or outpace us physically and mentally. That would usurp our dominance on the planet and, for many, could fundamentally alter the idea of what it means to be human.
Humanity itself?
However wide-ranging these risks are, they all have one thing in common: humans play a key role in determining their severity. So what if humans are their own biggest extinction risk?
That’s a focus of Sabin Roman’s research. As a research associate at the Centre for the Study of Existential Risk, he models societal evolution and collapse, looking at past civilizations including the Roman Empire and Easter Island. As Roman sees it, the majority of existential risks are “self-created,” rooted in societies and the systems they produce. In his view, humanity’s attraction to continuous growth leads to exploitation, planetary destruction and conflict. Ironically, that only increases some of the biggest threats we face today, and our vulnerability to them. “A bit too much hinges on perpetual economic growth. If we tried to optimize something else, that would be good!” he said.
He likens our civilization to a line of dominoes, where the risk isn’t so much the nudge that starts the cascade as our vulnerability to that nudge. “[The domino line] is very vulnerable to any perturbation,” Roman said. “If we actually want to change something, there’s very little realistic impact we can have on external factors. It’s more our internal functioning as a society that can change.”
Kemp agrees with this logic: “When people ask me, ‘What’s the biggest existential risk facing humankind?’ I tend to strive for a curveball in response: [poor] international cooperation.” Surreal as it may seem, that’s why studying humanity’s potential demise is a pragmatic pursuit: it can illuminate humanity’s own role in hastening the threat, and its potential to scale it down. Nelson believes that the importance of this challenge means we should be ramping up research on existential threats. “We need more people working on this, and more institutions with more resources to do so.”
So, is that vision from the apocalyptic film the one that awaits humanity? We have no accurate predictions or simple answers about our fate here on Earth. But looking back at collapsed societies, one thing Roman is sure of is that we have never been better equipped to protect ourselves. “The thing that’s different with us is we can actually learn from all those past lessons,” Roman said. “The opportunity to learn is enormous.”
Originally published on Live Science.