If Anyone Builds It, Everyone Dies

Why AI Is on Track to Kill Us All—and How We Can Avert Extinction

Contributors

By Eliezer Yudkowsky

By Nate Soares

Formats and Prices

Format:

  1. Hardcover $30.00 $40.00 CAD
  2. Audiobook Download (Unabridged) $24.99

An urgent warning from two artificial intelligence insiders on the reckless scramble to build superhuman AI—and how it will end humanity unless we change course.
 
In 2023, hundreds of machine-learning scientists signed an open letter warning about our risk of extinction from smarter-than-human AI. Yet today, the race to develop superhuman AI is only accelerating, as many tech CEOs throw caution to the wind, aggressively scaling up systems they don’t understand—and won’t be able to restrain. There is a good chance that they will succeed in building an artificial superintelligence on a timescale of years or decades. And no one is prepared for what will happen next.
 
For over 20 years, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have been studying the potential of AI and warning about its consequences. As Yudkowsky and Soares argue, sufficiently intelligent AIs will develop persistent goals of their own: bleak goals that are only tangentially related to what the AI was trained for; lifeless goals that are at odds with our own survival. Worse yet, in the case of a near-inevitable conflict between humans and AI, superintelligences will be able to trivially crush us, as easily as modern algorithms crush the world’s best humans at chess, without allowing the conflict to be close or even especially interesting.
 
How could an AI kill every human alive, when it’s just a disembodied intelligence trapped in a computer? Yudkowsky and Soares walk through both the argument and vivid extinction scenarios and, in so doing, leave no doubt that humanity is not ready to face this challenge—ultimately showing that, on our current path, If Anyone Builds It, Everyone Dies.

On Sale
Sep 30, 2025
Page Count
256 pages
ISBN-13
9780316595643

About the Authors

ELIEZER YUDKOWSKY is one of the founding researchers of the field of AGI alignment, which is concerned with understanding how smarter-than-human intelligences think, behave, and pursue their goals. He appeared on TIME magazine’s list of the 100 Most Influential People in AI, was one of the twelve public figures featured in The New York Times’s “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement,” and was one of the seven thought leaders spotlighted in The Washington Post’s discussion of “AI’s Rival Factions.” He spoke on the main stage at 2023’s TED conference and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, and many other venues. He has close to 200,000 followers on X, where he frequently dialogues with prominent public figures, including the heads of frontier AI labs.
 
NATE SOARES is the President of the Machine Intelligence Research Institute (MIRI). He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, has been interviewed in Vanity Fair and the Financial Times, and has spoken on conference panels alongside many of the AI field’s leaders.
