The introduction is aptly subtitled “Hard calls and easy calls”, a distinction the authors painstakingly revisit in each chapter.

Hundreds of AI scientists signed an open letter in 2023 stating that “Mitigating the risk of extinction from AI should be a global priority”.

AI research has yielded jump after jump over the last ten years, and it’s an easy call that we will eventually get to ASI (artificial superintelligence), which would be endgame for humanity; it’s a hard call, though, to predict exactly when that will happen.

The authors are co-leaders of the Machine Intelligence Research Institute (MIRI), and were themselves among the scientists originally trying to create ASI, before they realized that the way AI was being built was not going to end well for us.

They remind us in some cases, and inform us in others, of several moments in the planet’s history when the prevailing life of the time expected the world to keep looking the way it always had: agriculture (which moved us from an egalitarian culture to one of hoarding and hierarchy), Nazism (which meant that certain kinds of people were suddenly no longer safe), and one of the earliest examples, which I had been unaware of:

The Great Oxidation Event or Oxygen Catastrophe.

Roughly 2.4 billion years ago, cyanobacteria started photosynthesizing, producing so much oxygen that the prevailing anaerobic bacteria were literally poisoned to death. It is hard to imagine, and a sobering thought, that what we consider absolutely crucial to life caused a mass extinction event for a different kind of life.

The authors end with the example of nuclear weapons: despite building fearsome arsenals, world leaders did come together to create resilient systems for not starting nuclear wars, and it is time to come together in the same way to halt (the current way of) AI development before it’s too late.

