A review of a fascinating book on the challenge of artificial intelligence: “The Coming Wave” by Mustafa Suleyman

Suleyman is a British entrepreneur specialising in artificial intelligence (AI). He co-founded two pioneering AI companies, DeepMind and Inflection AI, before working for Google and then Microsoft. His book on AI has been an international bestseller and has been recommended by Bill Gates. I was afraid that it might be technical, but it is not technical at all (I would actually have welcomed an explanation of how AI works) and the book is an easy read (Suleyman had the support of a writer).

Overwhelmingly, the book is about AI, which is defined simply as “the science of teaching machines to learn humanlike capabilities”, but the coming wave of the title refers to an emerging cluster of related technologies centred on AI and synthetic biology and including robotics, quantum computing and nanotechnology. Of course, the history of humankind has been shaped by a succession of new technologies, so what is different about the coming wave?

Suleyman identifies four unique characteristics: asymmetry, so that, in a colossal transfer of power, individuals and groups can challenge corporations and governments for good or for ill; hyper-evolution, so that changes and improvements occur at incredible speed with endless acceleration; omni-use, so that the same technologies can be used for many different purposes in many different sectors; and autonomy, so that these technologies are largely beyond our ability to comprehend at a granular level.

The book describes some dramatic cases of AI success, such as the 2016 defeat of Lee Sedol, a virtuoso world champion of the Asian game Go (whose board allows some 10 to the power of 170 possible configurations), by the DeepMind program AlphaGo, and the release in November 2022 of the chatbot ChatGPT, developed by the AI research company OpenAI, which attracted more than a million users within a week.

There are many examples of current and likely future uses of AI: self-driving cars, trucks and tractors; improved diagnosis of illness; personalised medicine and the rapid development of new drugs; more efficient management of electricity grids and water systems; the development of sources of clean energy; and the modelling of such complexities as climate change and fusion reactions.

And Suleyman is not shy of thinking the unthinkable, postulating various hypothetical nightmare scenarios: the use of swarms of drones, spraying devices and bespoke pathogens by mass murderers or terrorist groups; robots equipped with facial recognition, DNA sequencing and automatic weapons; or the evolution of a super-intelligence that cannot be controlled by humans and sees humans as a threat.

He asserts: “We are going to live in an epoch when the majority of our daily interactions are not with other people but with AIs”, and: “The blunt truth is that nobody knows when, if or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values.”

He spends a lot of time writing about containment, which he defines as “the ability to monitor, curtail, control, and potentially even close down technologies”. A final chapter is entitled “The Steps Towards Containment” and, while there are some good ideas here, most rely on the will of companies, regulators and governments to take actions which are profoundly at odds with the world in which we live.

So, for example, he suggests a body a little like the International Atomic Energy Agency, which he calls the AI Audit Authority, operating at a global level. And yet, the week that I finished reading this book, at the AI Action Summit in Paris (February 2025), both the US and the UK refused to sign a declaration that called on all countries to ensure “safe, secure and trustworthy” AI technology.

Somehow Suleyman manages to conclude: “While there is compelling evidence that containment is not possible, temperamentally I remain an optimist.”