When Machines Run Amok

Max Tegmark was taken aback when he watched an AI program teach itself to play an arcade game—and play it much better than its human designers. Frank Rose reviews “Life 3.0,” by Mr. Tegmark of MIT.

August 29, 2017


COSMOLOGISTS TAKE ON the big questions, and in Life 3.0 Max Tegmark addresses what may be the biggest of them all: What happens when humans are no longer the smartest species on the planet—when intelligence is available to programmable objects that have no experience of mortal existence in a physical body? Science fiction poses such questions frequently, but Mr. Tegmark, a physicist at MIT, asks us to put our Terminator fantasies aside and ponder other, presumably more realistic, scenarios. Among them is the possibility that a computer program will become not just intelligent but wildly so—and that we humans will find ourselves unable to do anything about it.
Mr. Tegmark’s previous book, Our Mathematical Universe (2014), put a hugely debatable spin on the already counterintuitive notion that there exists not one universe but a multitude. Not all mathematicians were impressed. Life 3.0 will be no less controversial among computer scientists. Lucid and engaging, it has much to offer the general reader. Mr. Tegmark’s explanation of how electronic circuitry—or a human brain—could produce something so evanescent and immaterial as thought is both elegant and enlightening. But the idea that a machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists, to the point that people associated with it have been attacked as Luddites.

Yet the notion enjoys more credence today than it did a few years ago, partly thanks to Mr. Tegmark. Along with Elon Musk, Stephen Hawking and the Oxford philosopher Nick Bostrom, he has emerged as a leading proponent of “AI safety” research, which focuses on such critical matters as how to switch off intelligent machines before things get out of hand.
In March 2014 he co-founded the Boston-based Future of Life Institute to support work on the subject, and soon after he helped stage a conference at which AI researchers from around the world agreed that they should work not just to advance the field of artificial intelligence but to benefit humankind. This past January, he helped draw up a 23-point statement of principles that has been embraced by some 1,200 people in AI, among them the authors of the leading textbook on the subject and the founders of DeepMind, the Google-owned company whose AlphaGo program defeated one of the world’s top Go players last year in South Korea.
The issue is certainly timely. After decades in which artificial intelligence promised much and delivered little, recent breakthroughs in such target areas as facial recognition, automatic translation and self-driving cars have brought AI out of the woods. Amazon, Alphabet, Facebook, Tesla and Uber are making huge investments in AI research, as are Baidu and Alibaba in China. Where all this will take us is the broader focus of Mr. Tegmark’s book.

LIFE 3.0: Being Human in the Age of Artificial Intelligence
by Max Tegmark
Knopf, 384 pages, $28

Though he sees widespread benefits in fields ranging from medical diagnosis to power-grid management, Mr. Tegmark devotes the bulk of Life 3.0 to how things could go wrong. Most immediate is the threat of unemployment, starting perhaps among Uber drivers before eventually spreading to computer scientists whose machines have learned to program themselves. Even more disconcerting is the threat of an arms race involving cheap, mass-produced autonomous weapons. As Mr. Tegmark points out, “there isn’t much difference between a drone that can deliver Amazon packages and one that can deliver bombs.” Actually, bombs are crude compared with what AI could deliver once it has been weaponized: Think drones the size of bumblebees that could be programmed to kill certain people, or certain categories of people, by grabbing their skulls with tiny metal talons and drilling into their heads.
As horrific as that possibility may sound, it wouldn’t threaten the existence of the human species. Superintelligence might. No one really knows if a machine will ever develop the general-purpose intelligence that would be required. But in 2014 Mr. Tegmark caught a glimpse of how it might. He was watching a DeepMind program as it learned to play Breakout, a ’70s arcade game. The object of the game is to break through a wall by bouncing a ball off it repeatedly, knocking out a brick with every hit. At first the AI was hopeless. But it quickly got better, and before long it devised a relentlessly effective technique that none of the humans at DeepMind had thought of. It went on to learn 49 different arcade games, including Pong and Space Invaders, beating its human testers on more than half of them. Obviously it’s a very long way from vintage arcade games to general intelligence, let alone consciousness. But if a computer program can teach itself to play games, it might be able to teach itself many other things as well—slowly at first, then faster and faster.
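For readers curious about the mechanics behind that anecdote: DeepMind’s system paired reinforcement learning with deep neural networks, but the trial-and-error loop at its heart can be suggested with a far simpler method. The Python sketch below is a minimal tabular Q-learning loop, not DeepMind’s code; the “env” object, with its reset, step and actions members, is a hypothetical game interface assumed purely for illustration.

# A minimal sketch of learning a game by trial and error.
# DeepMind's Breakout agent used deep Q-networks; this toy version
# uses tabular Q-learning on a hypothetical "env" game interface
# (env.reset(), env.step(action) and env.actions are assumptions).

import random
from collections import defaultdict

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    # One value per (state, action) pair; all zero at the start,
    # which is why the earliest play looks hopeless.
    q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Usually play the best-known move; occasionally explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward the observed reward plus the
            # discounted value of the best follow-up move.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q

Roughly speaking, replacing the lookup table with a neural network and feeding it raw screen pixels yields the scaled-up version of this feedback loop that took DeepMind’s agent from flailing to outplaying its testers.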
What would that mean for humans? Nobody knows, including—as he freely admits—Mr. Tegmark. Like horses after the invention of the internal-combustion engine, we might be kept on as show animals—although Mr. Tegmark’s observation that the U.S. horse population fell almost 90% between 1915 and 1960 is not exactly heartening. He presents a dozen or so other scenarios as well. Would an omniscient AI act as a “protector god,” maximizing human happiness while allowing us the illusion that we’re still in control? Would it decide we’re a threat and wipe us out?
It’s impossible to know that either. By failing either to refute or champion the bulk of these possible futures, Mr. Tegmark makes the whole exercise seem divorced from reality. But he means it as a challenge: Rather than our being told what is going to happen, he wants us to decide what we want to happen. This sounds quite noble, if a tad naive—until he invites us to debate the issue on a website that is chockablock with promo material for the book. There’s a place for self-promotion, just as there’s a place for killer-robot movies—but does either really contribute to our understanding of what humanity faces? ◼︎
