The Dilemma -- Book Review: The Coming Wave

The best book I've read in a long time.  Absolutely required reading for anyone concerned about technological change, from one of the luminaries of AI.

This is the best book I’ve read in a long time.  It may even be wrong on many of its key points, but that doesn’t matter.

It’s not often that I recommend a book before I even finish reading it.  Since I average around five days per book, I’d have to be very excited to share one early.  It’s not often that I read a book the week it comes out and eagerly anticipate what The Economist has to say about it.  It’s even less often that The Economist then devotes its next cover to that very topic, including an interview with the author, its managing editor, and a renowned historian.

The book is The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma by Mustafa Suleyman, co-founder of DeepMind, one of the premier AI labs on the planet (and now part of Alphabet).  Many books can be accused of overly dramatic subtitles, but in this case I think it’s simply spot-on.  The coming wave will change everything, with dramatic consequences in the next five to twenty years.  I’m reminded of Bill Gates’s observation that people overestimate the change that will happen in the next two years but underestimate the change that will happen in the next ten.  Despite the hype around AI, I think people are dramatically underestimating the change that is coming.

The wave itself has two components.  First, what the public media calls AI, which is receiving a great deal of attention.  Second, synthetic biology, whose implications are staggering but which receives very little attention by comparison.  The two together form the coming wave (or waves) that will reshape modern life.  AI is about mechanizing and scaling intelligence, removing the barriers to intelligence-based tasks.  Synthetic biology is about doing the same at the most powerful substrate available to us: the molecular level.  The specific path we will follow is unknown, but the degree of possibility is unprecedented.

The first section of the book discusses the idea of containment: can we contain these technologies in a responsible way, similar to what we did with nuclear weapons?  No.  Suleyman talks in depth about the impossibility of full containment and what we should do in light of this.  Technologically, containment is nearly impossible because: 

  • There are huge power asymmetries involved
  • The underlying technology is hyper-evolving
  • The technology is omni-use (it’s truly general purpose)
  • The resultant systems are highly autonomous

These properties mean the technology spreads and advances almost by default.  The incentives for the continued development of AI and synthetic biology are also very strong: we are in the midst of a great-power competition in which non-dominant powers have every incentive to upend existing power structures (more on that later).  Add to this the open research culture that has shaped the field’s progress: the best researchers increasingly choose to work at labs where they can publish their work.  And our track record of containment is very poor.

So, containment is impossible.

That means we will see large advances in both AI and synthetic biology, some of which are complementary.  The implications of human-level AI, which will arrive precisely because we are not containing it, are truly profound.  Traditional parenting advice is to invest in education so that your child can have a good life, secure in their ability to contribute and make sense of the world.  But applying cognitive resources may no longer be something a human can do more efficiently than a machine.  Where would that leave us?  What would it mean to be human?  Would we become something more like bots in a simulation, no longer directing our lives with true agency, only wants and desires?  We may soon be in a world where intelligence is no longer the scarce component of getting something done.  I don’t know what that world looks like.

The implications of synthetic biology are equally profound.  Perhaps most narrowly, we’ll start to hyper-evolve as a species (not just technologically).  Suleyman posits that the next dimension of inequality will be at the biological level: between enhanced and unenhanced humans.  Thinking more broadly, we will likely engineer organisms to reorder the natural world.  Optimistically, we could solve global warming, process waste, and convert many input substances into something usable (such as carbon waste into plastics and back).  Coupled with advances in AI, we will understand the biological world far better.  Already we have seen AlphaFold predict the structures of some 200 million proteins, up from the roughly 190,000 known before.  Increasingly, we will be able to direct development at the biological level.  Terrifyingly, this includes pathogens.

A theme running through these changes is a deep shift of power away from the center.  A personalized AI assistant will help you get things done, whatever those things may be.  You will need the state less than ever before.  This will likely lead to a reordering of the social contract we stumbled into during the 20th century, one anchored in the capabilities of scientific manufacturing.  The scary outcome here is a technological dystopia, one you could argue China is already pursuing by reacting to change with a strong, technologically assisted assertion of control.  And while I think that would be a tragedy, there are good reasons to attempt to introduce some measure of control.

Existential risk (x-risk) is an interesting topic.  In one framing, the goal of much x-risk mitigation is to keep the set of people with the ability to kill a billion people completely disjoint from the set with the motivation to do so.  Suleyman gives the example of Aum Shinrikyo, the Japanese doomsday cult that carried out sarin gas attacks in the 1990s.  In today’s world, it would be far more capable.  The advance of technology, and the diffusion of power that comes with it, is going to greatly expand that first set.  We need to invest in making it harder for such groups to succeed.  Suleyman concludes the book with a ten-point list of things we could try; I encourage you to read the book itself to really understand the points he makes there.  The gist is that we should invest heavily, as a society, in responsible development, buying ourselves time and shifting the culture.

Where do I think this is all headed?  I feel confident in saying that we’re entering a time of great uncertainty, but I don’t feel confident predicting exactly how it will turn out.  What will be scarce?  What will matter?  What will it mean to be human, especially if many are engineering their very genomes while others are not?  Are we headed toward a Gattaca-like society?  At a simpler level, I do feel confident that individual AI assistants will transform daily life.  The easiest thing to picture is your phone acting as a very capable admin, helping you run your life.  This is just a point solution, though.  I also think it will become much easier to do whatever you want to do; think of this as an extension of how many DIY projects the internet already enables, especially with high-quality YouTube videos to show you how.  I feel fairly confident that the ability to code will not be an especially valuable skill 30 years from now, though the logical thinking that coding trains may still be.  And I think the ability to discern truth from fiction will be a key life skill.  Past that, I don’t know.  This is a question I wrestle with and will continue to wrestle with.

In summary, I think The Coming Wave is a rare gem of a book.  It asks the deepest first-principles questions about what is happening and what will happen to society.  It pulls from a surprising range of disciplines: technology, science, AI, biology, economics, political science, sociology, history, and psychology.  Perhaps it’s no surprise that Suleyman sits on the board of The Economist; the result is a book that feels like the most thoughtful, comprehensive, yet digestible synthesis of where we are today and where we might go.