Book Review: The Precipice

Existential risk is typically a hard topic to reason about. Toby Ord's work makes it much more tractable and highlights "why now" very clearly.

We are in a centuries-long “Cuban Missile Crisis” and may be encountering the Great Filter in our lifetimes

The Empty Universe & The Great Filter

Just where is everyone else?  The Fermi Paradox is one of the more profound questions we have, and the Drake Equation gives us some grasp of its enormity.  Briefly: given the immense scale of our universe, why have we not detected any signs of extraterrestrial life?  The Drake equation’s biggest unknown is the term L, the length of time a civilization keeps sending detectable signals into space.  Every other part of the equation suggests there should be many such civilizations.
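To make L’s role concrete, here is a minimal sketch of the Drake equation in Python.  The parameter values are illustrative assumptions of mine, not estimates from the book:

```python
# Drake equation: N = R* · f_p · n_e · f_l · f_i · f_c · L
# N is the expected number of civilizations in our galaxy whose signals we
# could detect.  All parameter values below are illustrative assumptions.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Expected number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

# Holding the other terms fixed, the answer is dominated by L,
# the signaling lifetime of a civilization.
for L in (100, 10_000, 1_000_000):
    n = drake(r_star=1.0,   # new stars formed per year in the galaxy (assumed)
              f_p=0.5,      # fraction of stars with planets (assumed)
              n_e=1.0,      # habitable planets per such star (assumed)
              f_l=0.1,      # fraction where life arises (assumed)
              f_i=0.1,      # fraction that develop intelligence (assumed)
              f_c=0.1,      # fraction that emit detectable signals (assumed)
              lifetime_years=L)
    print(f"L = {L:>9,} years -> N ≈ {n:g}")
```

With the other terms held fixed, moving L from a century to a million years swings the expected number of detectable civilizations from essentially zero to hundreds, which is why a short L so tidily resolves the paradox.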

Instead, space seems devoid of life.  Or at least of signal.  One possible resolution to the Fermi Paradox is the Great Filter – some event that effectively makes L very short.  The filter could lie either a) in our past or b) in our future.  Given the number of mass extinctions and humanity’s very short history, it’s very hard to say which.  The optimistic argument is that it’s behind us – maybe sentience and language are actually the great filter.  The pessimistic argument is that it’s ahead of us: god-like technology paired with medieval institutions and paleolithic emotions.  Nick Bostrom’s thought experiment about the “urn of invention” with a potential “black ball” is enough to convince me that we can’t reduce this chance to zero.

If the Great Filter lies ahead of us, we must confront the question: what existential risks does humanity face and what can we do about them? 

Existential Risks

Toby Ord’s The Precipice attempts to answer this question.  Slate Star Codex (SSC) has a fantastic review of this book here.  Ord argues that we’re in the middle of the Great Filter right now.  He dates the start of this era “to 11:29 a.m. (UTC) on July 16, 1945: the precise moment of the Trinity test.”  I think he is correct.  Per SSC:

Ord speculates that far-future historians will remember the entire 1900s and 2000s as a sort of centuries-long Cuban Missile Crisis, a crunch time when the world was unusually vulnerable and everyone had to take exactly the right actions to make it through.

We have enough nuclear weapons to destroy ourselves, the capability of engineering absolutely terrifying biological weapons, and institutions that don’t give confidence that we’ll manage these risks well.  So I am very sympathetic to the idea that we’re at an elevated civilizational risk and am glad that Ord spent the time to explain, catalog, and estimate the chance of each of these calamities.  

Ord does this through the lens of existential risk – anything that permanently destroys humanity’s potential.  Something catastrophic, like a pandemic that wipes out 95% of humanity, is clearly bad, but it is not what Ord is specifically trying to calculate.  The reason to care about existential risk is, in part, that we ought to value all life, including future generations.  Apply any reasonable discount function and you should still place enormous value on those possible future lives – and be willing to invest significantly today to protect them.
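To make the discounting point concrete, here is a rough back-of-the-envelope sketch.  The population figure, discount rates, and horizon are illustrative assumptions, not numbers from the book:

```python
# Rough illustration: how much weight does the future carry under a constant
# annual discount rate?  All parameters here are illustrative assumptions.

POP = 8e9  # roughly today's population, held constant for simplicity

def discounted_future_person_years(annual_rate, horizon_years=1_000_000):
    """Sum of discounted person-years over the horizon."""
    total = 0.0
    weight = 1.0
    for _ in range(horizon_years):
        total += POP * weight
        weight *= (1 - annual_rate)
        if weight < 1e-12:  # remaining terms are negligible
            break
    return total

for rate in (0.01, 0.001, 0.0001):
    value = discounted_future_person_years(rate)
    print(f"discount rate {rate:.2%}: future worth ~{value / POP:,.0f}x "
          f"one present year of lives")
```

Even a 1% annual rate values the future at roughly a century’s worth of present lives, and a 0.01% rate at roughly ten thousand years’ worth; Ord, for his part, argues against any pure time discounting of future wellbeing.  Under any of these, a serious investment in reducing existential risk looks cheap.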

If you’re convinced existential risk is bad, the next question is: what are the existential risks themselves?  A large part of Ord’s work is documenting these risks and showing how he estimates the chance of each.  As an aside, his work is one of the best examples of data science I’ve seen–the book is worth reading for that alone, even if you find the overarching argument unpersuasive.  SSC concurs:

I am really impressed with the care he puts into every argument in the book, and happy to accept his statistics at face value. People with no interest in x-risk may enjoy reading this book purely as an example of statistical reasoning done with beautiful lucidity.

To skip about a hundred pages of this reasoning, here is the table Ord produces of the chance that each risk causes an existential catastrophe in the next 100 years:

  • Asteroid or comet impact: ~ 1 in 1,000,000  
  • Supervolcanic eruption: ~ 1 in 10,000  
  • Stellar explosion: ~ 1 in 1,000,000,000  
  • Total natural risk: ~ 1 in 10,000  
  • Nuclear war: ~ 1 in 1,000  
  • Climate change: ~ 1 in 1,000  
  • Other environmental damage: ~ 1 in 1,000  
  • “Naturally” arising pandemics: ~ 1 in 10,000  
  • Engineered pandemics: ~ 1 in 30  
  • Unaligned artificial intelligence: ~ 1 in 10  
  • Unforeseen anthropogenic risks: ~ 1 in 30  
  • Other anthropogenic risks: ~ 1 in 50 
  • Total anthropogenic risk: ~ 1 in 6 
  • Total existential risk: ~ 1 in 6

Note: since this book was written in early 2020, I would argue that the chance of nuclear war has increased.
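As a quick sanity check on how the individual estimates relate to the headline figure, here is a sketch that combines them as if each were an independent risk.  Ord’s own total is not a mechanical sum of the rows, so treat this only as a rough consistency check:

```python
# Naive sanity check: combine Ord's individual estimates as if each were an
# independent chance of existential catastrophe this century.
# (Ord's headline total is not computed this way; this is only a rough check.)

risks = {
    "asteroid or comet impact":       1 / 1_000_000,
    "supervolcanic eruption":         1 / 10_000,
    "stellar explosion":              1 / 1_000_000_000,
    "nuclear war":                    1 / 1_000,
    "climate change":                 1 / 1_000,
    "other environmental damage":     1 / 1_000,
    "'naturally' arising pandemics":  1 / 10_000,
    "engineered pandemics":           1 / 30,
    "unaligned AI":                   1 / 10,
    "unforeseen anthropogenic risks": 1 / 30,
    "other anthropogenic risks":      1 / 50,
}

survival = 1.0
for p in risks.values():
    survival *= (1 - p)  # chance this particular catastrophe does not occur

total = 1 - survival
print(f"combined risk ≈ {total:.3f} (about 1 in {1 / total:.1f})")
```

Treated as independent, the line items combine to roughly 1 in 6, in line with the table’s bottom line.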

If any of those numbers seem odd or unintuitive to you, I strongly encourage you to read the book and understand where they came from–it is, again, one of the best examples I’ve seen of statistical reasoning.  For example, climate change at 1 in 1,000?  Ord acknowledges that we are changing the climate and that it will cause a great deal of harm, but argues it is unlikely to be existential unless we somehow trigger a runaway process that turns our planet into Venus–and he walks through the plausible mechanisms for that.  The “unforeseen risk” entry is a nod to Bostrom’s thought experiment about the urn.

The most controversial estimate is the 1-in-10 chance from unaligned artificial intelligence, which we’ll dive into.  But before we get to the biggest upcoming risk, I want to spend a bit of time on how well humanity has handled the biggest past risk: nuclear weapons.  That should set an appropriate baseline for how confident to be that we’ll handle the next one.

Nuclear Risk and Nuclear Weapons

Viewers of the movie Oppenheimer may remember some discussion of the concept of “atmospheric ignition.”  This was a small risk before the Trinity test that we did not have the capability to fully understand.  SSC sums this up well:

And even when people seem to care about distant risks, it can feel like a half-hearted effort. During a Berkeley meeting of the Manhattan Project, Edward Teller brought up the basic idea behind the hydrogen bomb. You would use a nuclear bomb to ignite a self-sustaining fusion reaction in some other substance, which would produce a bigger explosion than the nuke itself. The scientists got to work figuring out what substances could support such reactions, and found that they couldn’t rule out nitrogen-14. The air is 79% nitrogen-14. If a nuclear bomb produced nitrogen-14 fusion, it would ignite the atmosphere and turn the Earth into a miniature sun, killing everyone. They hurriedly convened a task force to work on the problem, and it reported back that neither nitrogen-14 nor a second candidate isotope, lithium-7, could support a self-sustaining fusion reaction.

One of those two calculations was wrong!  Fortuitously, it was the lithium-7 one, so atmospheric ignition did not happen–although the mistake meant that the Castle Bravo hydrogen bomb test had roughly triple the expected yield.  From an existential-risk perspective, we did not do the science that carefully.

As documents are declassified over time, it is becoming increasingly clear that we didn’t manage the politics of nuclear weapons well either.  Ord catalogs several instances where the Cold War could have accidentally turned into a nuclear war.  I add others I’ve come across in my reading:

  • The Cuban Missile Crisis – “unbeknownst to the US military leadership, a conventional attack on Cuba was likely to be met with a nuclear strike on American forces”
  • Training Tape Incident – “November 9, 1979 At 3 a.m. a large number of incoming missiles—a full-scale Soviet first strike—appeared on the screens at four US command centers. The US had only minutes to determine a response before the bulk of their own missiles would be destroyed.… The screens had been showing a realistic simulation of a Soviet attack from a military exercise that had mistakenly been sent to the live computer system.”
  • Autumn Equinox Incident – “September 26, 1983 Shortly after midnight, in a period of heightened tensions, the screens at the command bunker for the Soviet satellite-based early-warning system showed five ICBMs launching from the United States. The duty officer, Stanislav Petrov, had instructions to report any detected launch to his superiors, who had a policy of immediate nuclear retaliatory strike. For five tense minutes he considered the case, then despite his remaining uncertainty, reported it to his commanders as a false alarm.”
  • Able Archer 83 – right after the Autumn Equinox Incident, NATO ran an exercise that Soviet leadership was convinced was cover for the real thing.  Many cite this as the moment the world came closest to nuclear war–closer even than the Cuban Missile Crisis.
  • Norwegian Rocket Incident – a Norwegian scientific rocket launched to study the northern lights put Russia on high alert over what looked like a single incoming nuclear missile.  Norway had notified Russia in advance, but the notification never made it through the bureaucracy to the officers watching the radar screens.

The close calls of the Cold War highlight the difficulty of managing existential risk, even when the stakes and mechanism are clear. As we look to the future, the potential risk from advanced artificial intelligence presents an even more complex challenge.  According to Ord's estimates, the risk of existential catastrophe from unaligned AI (10%) is the single biggest threat.

Unaligned AI

Ord’s estimate of this risk is a lot less scientific–how could it be otherwise?  Ord acknowledges this:

The case for existential risk from AI is clearly speculative. Indeed, it is the most speculative case for a major risk in this book. Yet a speculative case that there is a large risk can be more important than a robust case for a very low-probability risk, such as that posed by asteroids. What we need are ways to judge just how speculative it really is, and a very useful starting point is to hear what those working in the field think about this risk.

He reasons from plausible general scenarios and refuses to push the risk below 10%, anchored by what many of the leading researchers in the field say.  A field where many top researchers are worried about harm to humanity should raise your prior that such harm is at least possible.  Broadly, the argument is that building artificial general intelligence (AGI) and imbuing it with goals carries risk.  Personally, I think the biggest sub-risk here is in the dystopia camp rather than the rogue-AI camp.  Ord explains how to think about this class of outcomes:

We can divide the unrecoverable dystopias we might face into three types, on the basis of whether they are desired by the people who live in them. There are possibilities where the people don’t want that world, yet the structure of society makes it almost impossible for them to coordinate to change it. There are possibilities where the people do want that world, yet they are misguided and the world falls far short of what they could have achieved. And in between there are possibilities where only a small group wants that world but enforces it against the wishes of the rest. 

The last possibility is the most salient.  My reading of history suggests there is no shortage of aspiring dictators who held absolute political power but lacked the means to fully impose their will.  AGI may supply those means.  When I think in systems, it is usually not the will but the capabilities that have limited past tyrants and dictators (especially the infamous ones of the 20th century).  Intelligence without its own motivations is a scary capability–in the past, intelligence came attached to people with scruples, which provided enough friction to undermine totalitarian systems.

For a much longer treatment of how to think about building aligned AI, I recommend the book Human Compatible.  If I had to pick one quote to summarize that book, it would be this one:

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

The complexities and uncertainties surrounding AI alignment merit much deeper analysis than is possible here. This challenge looms large on the horizon as we consider the array of risks facing our species. Looking back over the long arc of Earth's history, we see a pattern of recurrent mass extinctions, reminding us of the fragility of life. In our present Anthropocene era, we must grapple with the new risks introduced by our own unprecedented power to shape the trajectory of life itself.

Conclusion

Though the fossil record is thin, it’s clear that our planet has gone through multiple mass extinctions since complex life arose.  It’s possible there were more for which we simply lack concrete evidence.  Ord lists the five mass extinctions (and you could argue humanity is causing a sixth now):

  • Late Ordovician – 443 Ma – 86% of species lost
  • Late Devonian – 359 Ma – 75% of species lost
  • End-Permian – 252 Ma – 96% of species lost
  • End-Triassic – 201 Ma – 80% of species lost
  • End-Cretaceous – 66 Ma – 76% of species lost

That said, I think humanity has also become more resilient to many types of risk.  We can adapt to changing weather patterns.  We can generate electricity in a wide variety of ways, allowing us to live in an incredibly wide range of climates.  Still, there is a bit of context I want to put around these past trends–they may not matter that much.  The one graph that summarizes this best for me:

Earlier I referred to the potential “runaway” processes that could make global warming existential.  There is another “runaway” process, visible on the scale of thousands or millions of years: the growth of our ability to transform our environment and shape it to our will.  A reading of The Coming Wave suggests this will only speed up–we are learning to augment intelligence and biology.  There is, of course, a risk that it all goes wrong, and now is the time to be more vigilant.

Thinking back to the Great Filter, I believe the mechanism lies in dopamine.  The reward circuit that drives us to strive for more and tells us never to be complacent contains both the source of our progress and the seeds of the risk.  Developmentally, humans are wired to seek a new identity and separate from their parents.  Societally, we are very interested in change, often for its own sake.  Combine this with the ability to pass learning along at the “software layer” and you get humanity.

In my view, this very dynamic – we are creating intelligence because of dopamine, and for the same reason we are at elevated risk – is why managing this risk is one of the most important things we can do.

Ord argues our responsibility could be universal in scope: 

Martin Rees, Max Tegmark and Carl Sagan have reasoned that if we are alone, our survival and our actions might take on a cosmic significance. While we are certainly smaller than the galaxies and stars, less spectacular than supernovae or black holes, we may yet be one of the most rare and precious parts of the cosmos. The nature of such significance would depend on the ways in which we are unique. If we are the only moral agents that will ever arise in our universe—the only beings capable of making choices on the grounds of what is right and wrong—then responsibility for the history of the universe is entirely on us.

Additional Resources:

  • Slate Star Codex review of The Precipice
  • Slate Star Codex review of Human Compatible
  • My book review of The Coming Wave