August 2015 Issue [Readings]

The Transhuman Condition

By John Markoff, from Machines of Loving Grace, out this month from Ecco Books. Markoff has been a technology and business reporter for the New York Times since 1988.

Bill Duvall grew up on the peninsula south of San Francisco. The son of a physicist who was involved in classified research at Stanford Research Institute (SRI), a military-oriented think tank, Duvall attended UC Berkeley in the mid-1960s; he took all the university’s computer-programming courses and dropped out after two years. When he joined the think tank where his father worked, a few miles from the Stanford campus, he was assigned to the team of artificial-intelligence researchers who were building Shakey.

Although Life magazine would later dub Shakey the first “electronic person,” it was basically a six-foot stack of gear, sensors, and motorized wheels that was tethered — and later wirelessly connected — to a nearby mainframe. Shakey wasn’t the world’s first mobile robot, but it was the first that was intended to be truly autonomous. It was designed to reason about the world around it, to plan its own actions, and to perform tasks. It could find and push objects and move in a planned way in its highly structured world.

At both SRI and the nearby Stanford Artificial Intelligence Laboratory (SAIL), which was founded by John McCarthy in 1962, a tightly knit group of researchers was attempting to build machines that mimicked human capabilities. To this group, Shakey was a striking portent of the future; they believed that the scientific breakthrough that would enable machines to act like humans was coming in just a few short years. Indeed, among the small community of AI researchers who were working on both coasts during the mid-Sixties, there was virtually boundless optimism.

But the reality disappointed Duvall. Shakey lived in a large open room with linoleum floors and a couple of racks of electronics. Box-like objects were scattered around for the robot to “play” with. Shakey’s sensors would capture its environment and then it would “think” — standing motionless for minutes on end — before moving. Even in its closed and controlled world, the robot frequently broke down or drained its batteries after just minutes of operation.

Down the hall from the Shakey laboratory, another research group, led by computer scientist Doug Engelbart, was building a computer to run a program called NLS — the oN-Line System. Most people who know of Engelbart today know him as the inventor of the mouse. But the mouse, to Engelbart, was simply a gadget to improve our ability to interact with computers. His more encompassing idea was to use computer technologies to make it possible for small groups of scientists, engineers, and educators to “bootstrap” their projects by employing an array of ever more powerful software tools to organize their activities and create a “collective I.Q.” that outstripped the capabilities of any single individual. During World War II, Engelbart had stumbled across an article by Vannevar Bush that proposed a microfilm-based information-retrieval system called Memex to manage all of the world’s knowledge. He realized that such a system could be assembled with computers.

The cultural gulf between McCarthy’s artificial intelligence and Engelbart’s contrarian NLS was already apparent to those on either side. When Engelbart visited MIT to demonstrate his project, prominent AI researcher Marvin Minsky complained that he was wasting research dollars on a glorified word processor. But the idea captivated Bill Duvall. Before long he switched his allegiance and moved down the hall to work in Engelbart’s lab.

Late on the evening of October 29, 1969, Duvall connected the NLS system in Menlo Park, via a data line leased from the phone company, to a computer controlled by another young hacker in Los Angeles. It was the first time that two computers connected over the network that would become the Internet. Duvall’s leap from the Shakey laboratory to Engelbart’s NLS made him one of the earliest people to stand on both sides of a line that even today distinguishes two rival engineering communities. One of these communities has relentlessly pursued the automation of the human experience — artificial intelligence. The other, human-computer interaction — what Engelbart called intelligence augmentation — has concerned itself with “man-machine symbiosis.” What separates AI and IA is partly their technical approaches, but the distinction also implies differing ethical stances toward the relationship of man to machine.

During the 1970s and 1980s the field of artificial intelligence drew a generation of brilliant engineers, but it often disappointed them in much the way that it had disappointed Duvall. Like him, many of these engineers turned to the contrasting ideal of intelligence augmentation. But today, AI is beginning to meet some of the promises made for it by SAIL and SRI researchers half a century ago, and artificial intelligence is poised to have an impact on society that may be greater than the effect of personal computing and the Internet.

Although their project has now largely been forgotten, the designers of Shakey pioneered computing technologies that are now used by more than a billion people. The mapping software in our cars and our smartphones is based on techniques the team first developed. Their A* algorithm remains the best-known way to find the shortest path between two locations. Toward the end of the Shakey project, speech control was added as a research task; Apple’s Siri, whose name is a nod to SRI, is a distant descendant of the machine that began life as a stack of rolling sensors and actuators.
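
For the curious, the heart of A* fits in a few lines: the search always extends the route that looks cheapest once an optimistic estimate of the remaining distance is added to the distance already traveled. The Python sketch below is a generic textbook rendering, not the SRI team’s code; the neighbors, cost, and heuristic functions are placeholders that a real mapping system would supply.

    import heapq
    import itertools

    def a_star(start, goal, neighbors, cost, heuristic):
        # The frontier is ordered by g + h: distance traveled so far plus
        # an estimate of the distance remaining. A counter breaks ties so
        # that nodes themselves are never compared.
        tie = itertools.count()
        frontier = [(heuristic(start), next(tie), 0, start, [start])]
        best = {start: 0}
        while frontier:
            _, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path  # first arrival at the goal is the shortest route
            for nxt in neighbors(node):
                g2 = g + cost(node, nxt)
                if g2 < best.get(nxt, float("inf")):
                    best[nxt] = g2
                    heapq.heappush(
                        frontier,
                        (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]),
                    )
        return None  # no route exists

On a street grid, neighbors would list adjacent intersections and heuristic could be straight-line distance, which never overestimates the actual driving distance; that property is what guarantees A* returns a truly shortest route.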

While Engelbart’s original research led directly to the PC and the Internet, McCarthy’s lab did not provide a single dramatic breakthrough. Rather, the falling costs of sensors, computer processing, and information storage, along with the gradual shift away from symbolic logic and toward more pragmatic statistical and machine-learning algorithms, have made it possible for engineers and programmers to create computerized systems that see, speak, listen, and move around in the world.

As a result, AI has been transformed from an academic curiosity into a force that is altering countless aspects of the modern world. This has created an increasingly clear choice for designers — a choice that has become philosophical and ethical, rather than simply technical: will we design humans into or out of the systems that transport us, grow our food, manufacture our goods, and provide our entertainment?

As computing and robotics systems have grown from laboratory curiosities into the fabric that weaves together modern life, the AI and IA communities have continued to speak past each other. The field of human-computer interaction has largely operated within the philosophical framework originally set down by Engelbart — that computers should be used to assist humans. In contrast, the artificial-intelligence community has for the most part remained unconcerned with preserving a role for individual humans in the systems it creates.

Terry Winograd was one of the first to see the two extremes clearly and to consider their consequences. As a graduate student at MIT in the 1960s, Winograd studied human language in order to build a software robot that was capable of interacting with humans in conversation. During the 1980s, he was part of a small group of AI researchers who engaged in seminars at Berkeley with the philosophers Hubert Dreyfus and John Searle. The philosophers persuaded Winograd that there were real limits to the capabilities of intelligent machines. In part because of his changing views, he left the field of artificial intelligence.

A decade later, as the faculty adviser for Google cofounder Larry Page, Winograd counseled the young graduate student to focus on Web search rather than more far-fetched technologies. Page’s original PageRank algorithm, the heart of Google’s search engine, can perhaps be seen as the most powerful example of human augmentation in history. The algorithm systematically collected human decisions about the value of information and pooled those decisions to prioritize search results. Although some criticized the process for siphoning intellectual labor from vast numbers of unwitting humans, the algorithm established an unstated social contract: Google mined the wealth of human knowledge and returned it in searchable form to society, while reserving for itself the right to monetize the results.
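
In its classroom form, the algorithm is a short power iteration over the link graph: each page passes its importance along its outbound links, and a damping factor models a reader who occasionally jumps somewhere at random. The Python sketch below shows only that textbook version, run on a made-up three-page web; Google’s production ranking was, and is, vastly more elaborate.

    def pagerank(links, damping=0.85, iterations=50):
        # links maps each page to the pages it links to. A page splits its
        # score evenly among its outbound links; the damping factor models
        # a reader who sometimes jumps to a random page instead.
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages  # a dead-end page shares with everyone
                for t in targets:
                    new_rank[t] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    # A toy three-page web: the most-linked-to page earns the highest rank.
    print(pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))

Each link, in other words, is treated as a human vote about what is worth reading, which is why the algorithm can be described as augmentation rather than automation.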

Since it established its search box as the world’s most powerful information monopoly, Google has yo-yoed between IA and AI applications and services. The ill-fated Google Glass was intended as a “reality-augmentation system,” while the company’s driverless-car project represents pure AI — replacing human agency and intelligence with a machine. Recently, Google has undertaken what it loosely identifies as “brain” projects, which suggests a new wave of AI.

In 2012, Google researchers presented a paper on a machine-vision system. After training itself on 10 million digital images taken from YouTube videos, the system dramatically outperformed previous efforts at an automated-vision network, roughly doubling their accuracy in recognizing objects from a list of 20,000 distinct items. Among other things, the system taught itself to recognize cats — perhaps not surprising, given the overabundance of cat videos on YouTube — with a mechanism that the scientists described as a cybernetic cousin to what takes place in the brain’s visual cortex. The experiment was made possible by Google’s immense computing resources, which allowed researchers to turn loose a cluster of 16,000 processors on the problem — though that number still, of course, represented a tiny fraction of the billions of neurons in a human brain, a huge portion of which are devoted to vision.
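
The “training itself” in such systems is unsupervised feature learning: a network adjusts its weights until it can reconstruct unlabeled inputs, and recognizable features emerge as a by-product. The toy Python sketch below illustrates only that core idea, with random numbers standing in for image patches; the 2012 system was a vastly larger multilayer network, and nothing here should be read as its actual architecture.

    import numpy as np

    rng = np.random.default_rng(0)
    patches = rng.standard_normal((1000, 64))   # stand-ins for 8x8 image patches
    W = rng.standard_normal((64, 16)) * 0.1     # 16 features to be learned

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    lr = 0.01
    for _ in range(200):
        hidden = sigmoid(patches @ W)   # encode each patch as 16 feature activations
        recon = hidden @ W.T            # decode back to 64 values (tied weights)
        err = recon - patches           # how badly each patch was reconstructed
        # Gradient of the squared reconstruction error with respect to W
        grad_hidden = (err @ W) * hidden * (1.0 - hidden)
        grad = patches.T @ grad_hidden + err.T @ hidden
        W -= lr * grad / len(patches)
    # The columns of W now hold features discovered without a single label.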

Speculation about whether Google is on the trail of a genuine artificial brain has become increasingly rampant. There is certainly no question that a growing group of Silicon Valley engineers and scientists believe themselves to be closing in on “strong” AI — the creation of a self-aware machine with human or greater intelligence.

Whether or not this goal is ever achieved, it is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society.

In Silicon Valley it is fashionable to celebrate this development, a trend that is most clearly visible in organizations like the Singularity Institute and in books like Kevin Kelly’s What Technology Wants (2010). In an earlier book, Out of Control (1994), Kelly came down firmly on the side of the machines:

The problem with our robots today is that we don’t respect them. They are stuck in factories without windows, doing jobs that humans don’t want to do. We take machines as slaves, but they are not that. That’s what Marvin Minsky, the mathematician who pioneered artificial intelligence, tells anyone who will listen. Minsky goes all the way as an advocate for downloading human intelligence into a computer. Doug Engelbart, on the other hand, is the legendary guy who invented word processing, the mouse, and hypermedia, and who is an advocate for computers-for-the-people. When the two gurus met at MIT in the 1950s, they are reputed to have had the following conversation:

Minsky: We’re going to make machines intelligent. We are going to make them conscious!

Engelbart: You’re going to do all that for the machines? What are you going to do for the people?

This story is usually told by engineers working to make computers more friendly, more humane, more people centered. But I’m squarely on Minsky’s side — on the side of the made. People will survive. We’ll train our machines to serve us. But what are we going to do for the machines?

But to say that people will “survive” understates the possible consequences: Minsky is said to have responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, they’ll keep us as pets.”

Until recently, the artificial-intelligence community has largely chosen to ignore the ethics of systems that its members consider merely powerful tools. When I asked one engineer who is building next-generation robots about the impact of automation on people, he told me, “You can’t think about that; you just have to decide that you are going to do the best you can to improve the world for humanity as a whole.”

AI and machine-learning algorithms have already led to transformative applications in areas as diverse as science, manufacturing, and entertainment. Machine vision and pattern recognition have been essential to improving quality in semiconductor design. Drug-discovery algorithms have systematized the creation of new pharmaceuticals. The same breakthroughs have also brought us increased government surveillance and social-media companies whose business model depends on invading privacy for profit.

Optimists hope that the potential abuses of our computer systems will be minimized if the application of artificial intelligence, genetic engineering, and robotics remains focused on humans rather than algorithms. But the tech industry’s track record gives little evidence of moral enlightenment. It would be truly remarkable if a Silicon Valley company rejected a profitable technology for ethical reasons. Today, decisions about implementing technology are made largely on the basis of profitability and efficiency. What is needed is a new moral calculus.

