October 2002 Issue

Programming the Post-Human

Computer science redefines "life"
There are times when you feel you are witnessing the precise moment when science fiction crosses over into science. I don’t mean the advent of gadgetry like voice-recognition software or flat-panel computer monitors, or even the coming of space tourism, but a moment when one of the great, enduring conundrums of our speculative literature suddenly materializes as an actual human situation. Perhaps the greatest of these is the problem of distinguishing humans from robots—what unique element, if any, separates us from machines?—a theme that received its classic treatment by Isaac Asimov in his 1946 story “Evidence.” In the story, a future society has forbidden the use of humanoid robots, since it is feared that the intelligent machines, identical in appearance to humans but with superior powers, will take over the world. A man named Stephen Byerley, who is suspected of being a robot, eventually does just that, becoming the first “World Coordinator.” Many years later, however, his humanity is still in doubt:
 

I stared at her with a sort of horror, “Is that true?”

“All of it,” she said.

“And the great Byerley was simply a robot.”

“Oh, there’s no way of ever finding out. I think he was. But when he decided to die, he had himself atomized, so that there will never be any legal proof. Besides, what difference would it make?”

In January 2002, fifty-six years after the publication of Asimov’s story, a group of computer scientists, cryptographers, mathematicians, and cognitive scientists gathered at “the first workshop on human interactive proofs,” where their goal was the creation of a CAPTCHA, a “Completely Automated Public Turing Test to Tell Computers and Humans Apart.” In Asimov’s story, distinguishing robots from humans was a matter of world-historical importance, a question of human dignity and worth; the problem for the scientists at the workshop was the development of automated methods to prevent software robots, or “bots,” from invading chat rooms and barraging email systems with unwanted “spam” messages. Thus fantasy leaks into everyday life, a grand vision of the future reduced to a pressing, if mundane, commercial problem: how to tell human from machine.
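
A CAPTCHA works by posing a challenge that people solve easily and bots, so far, do not; in practice the challenge is a distorted image or sound. What follows is only a minimal sketch of that challenge-response shape, in Python, with a trivial stand-in puzzle and invented function names rather than anything proposed at the workshop:

```python
# A toy sketch of the challenge-response shape of a CAPTCHA gate.
# Real CAPTCHAs rely on distorted images or audio that humans parse
# easily and machines (so far) do not; this stand-in uses a trivial
# text puzzle only to show where such a test sits in a sign-up flow.
import random

def make_challenge():
    """Return a (question, answer) pair a person can solve at a glance."""
    a, b = random.randint(2, 9), random.randint(2, 9)
    return f"What is {a} plus {b}? ", str(a + b)

def gate(get_response):
    """Admit the caller only if the challenge is answered correctly."""
    question, answer = make_challenge()
    return get_response(question).strip() == answer

if __name__ == "__main__":
    admitted = gate(input)   # a human types the answer at the prompt
    print("welcome" if admitted else "access denied")
```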

What is interesting about this problem is that it’s one we humans have brought on ourselves. It is not a contest of human versus machine, though it is often presented that way; it is instead an outgrowth of what is most deeply human about us as Homo faber, the toolmaker. We have imagined the existence of robots, and having dreamed them up we feel compelled to build them, and to endow them with as much intelligence as we possibly can. We can’t help it, it seems; it’s in our nature as fashioners of helpful (and dangerous) objects. We can’t resist taking up the dare: Can we create tools that are smarter than we are, tools that cease in crucial ways to be “ours”?

Underlying that dare is a philosophical shift in the scientific view of humanity’s role in the great project of life. Researchers in robotics and artificial life (also known as “Alife,” the branch of computer science that concerns itself with the creation of software exhibiting the properties of life) openly question the “specialness” of human life. Some call life as we know it on Earth merely one of many “possible biologies,” and see our reverence for humanity as something of a prejudice (“human chauvinism”). Personhood has been defined as “a status granted to one another by society, not innately tied to being a carbon-based life form.” According to Rodney Brooks, director of MIT’s artificial-intelligence lab, evolution spelled the end of our uniqueness in relation to other living creatures by defining us as evolved animals; and robotics, in its quest to create a sentient machine, looks forward to ending the idea of our uniqueness in relation to the inanimate world. In what may reflect supreme humility (we are no better than the rocks or the apes) or astounding hubris (we can create life without the participation of either God or the natural forces of evolution), computer science has initiated a debate over the coming of the “post-human”: a nonbiological, sentient entity.

According to this idea, the post-human’s thoughts would not be limited by the slow speed of our own nervous systems. Unhampered by the messy wet chemistry of carbon-based life, loosed from the random walk of evolution, the post-human can be designed, consciously, to exceed our capabilities. Its memory can be practically limitless. It can have physical strength without bounds. And, freed from the senescence of the cells, it might live forever. If this sounds like Superman (“with powers and abilities far beyond those of mortal man”), consider another of those moments when science fiction passes over into science:

The date was April 1, 2000. The place was a lecture hall on the campus of Stanford University. Douglas Hofstadter, the computer scientist perhaps best known for his book Gödel, Escher, Bach, assembled a panel of roboticists, engineers, computer scientists, and technologists, and asked them to address the question: “Will spiritual robots replace humanity by 2100?”

Despite the date, it was not an April Fools’ joke. Hofstadter began by saying, crankily, that he had “decided to eliminate naysayers” from the panel, making his point with a cartoon of a fish that thinks it is ridiculous that life could exist on dry land (“gribbit, gribbit,” went the sound of a frog). “It is more amazing,” he said, “that life could come from inert matter than from a change of substrate”—more amazing that life could arise from a soup of dead molecules than change its base from carbon to something else; silicon, for example. Hofstadter looked into the future and said, without nostalgia or regret: “I really wonder whether there will be human beings.”

The room was filled to fire-marshal-alarming proportions. People jammed the doors, stood against the walls, sat in the aisles, on the steps in the steep balcony of the lecture hall, leaned dangerously against the balcony rails. The audience, young and old, students and “graybeards” of the computing community of Silicon Valley, sat still and quiet, leaning forward, putting up with the crowding and the heat and Doug Hofstadter’s grouchy refusal to use a microphone. Sitting, as I was, high up in the balcony, the scene reminded me of nothing so much as those paintings of early medical dissections, crowds of men peering down to where the cadaver lay slashed open in the operating theater below. That day at Stanford there was the same sense that some threshold, previously taboo to science, had been crossed. Computer science, which heretofore had served humanity by creating its tools, was now considering another objective altogether: the creation of a nonbiological, “spiritual” being—sentient, intelligent, alive—who could surpass and, perhaps, control us.

This was not the first time computer science thought it was on the verge of creating a successor race of machines. When I was a young programmer in the late 1970s and early 1980s, a branch of computer science then called “artificial intelligence” believed that it was close to creating an intelligent computer. Although AI would fail spectacularly in fulfilling its grand expectations, the debate surrounding the field was alluring. Like many at the time, I saw in AI the opportunity to explore questions that had previously been in the province of the humanities. What are we? What makes a human intelligent? What is consciousness, knowledge, learning? How can these things be represented to a machine, and what would we learn about ourselves in the formation of that representation? It was clear that as members of a secular society that has given up on the idea of God we would be looking elsewhere for the source of what animates us, and that “elsewhere” would be the study of cybernetic intelligence, the engine of postmodern philosophical speculation.

It is for this reason that the question of the post-human is worth exploring. Whether or not we can build a “spiritual robot” by 2100, in asking what is “post” human, we must first ask what is human. The ensuing debate inherits the questions that once belonged almost exclusively to philosophy and religion—and it inherits the same ancient, deep-seated confusions.

Over the years, as I listened to the engineering give-and-take over the question of artificial life-forms, I kept coming up against something obdurate inside myself, some stubborn resistance to the definition of “life” that was being promulgated. It seemed to me too reductive of what we are, too mechanistic. Even if I could not quite get myself to believe in God or the soul or the Tao or some other metaphor for the ineffable spark of life, still, as I sat there high in the balcony of the Stanford lecture hall, listening to the cyberneticists’ claims to be on the path toward the creation of a sentient being, I found myself muttering, No, that’s not right, we’re not just mechanisms, you’re missing something, there’s something else, something more. But then I had to ask myself: What else could there be?

Over the last half-century, in addressing the question “What are we humans?” cybernetics has come up with three answers. We are, in order of their occurrence in the debate, (1) computers, (2) ants, and (3) accidents.

The first, the co-identification of human sentience and the computer, appeared almost simultaneously with the appearance of computers. In 1950, only four years after the construction of ENIAC, generally regarded as the first digital computer, the mathematician Alan Turing famously proposed that digital machines could think. And by the time computers had come into general use, in the 1960s, the view of the human brain as an information processor was already firmly installed. It is an odd view, if you consider it. ENIAC was conceived as a giant calculator: it was designed to compute the trajectory of artillery shells. That is, its role was understood as human complement, doing well what we do poorly (tedious computation, precise recall of lists of numbers and letters) and doing badly what we do well (intuitive thinking, acute perception, reactions involving the complex interplay of mental, physical, and emotional states). Yet by 1969, when computers were still room-sized, heat-generating behemoths with block-letter-character screens, the computer scientist and Nobel Laureate in economics Herbert Simon did not seem inclined to explain his premise when he wrote: “The computer is a member of an important family of artifacts called symbol systems. . . . Another important member of the family (some of us think, anthropomorphically, it is the most important) is the human mind and brain.” Simon could begin a thought by saying, “If computers are organized somewhat in the image of man” without going on to question that “if.” In existence barely twenty-five years, the machine that was designed to be our other—the not-human, accurate in a world where to be human is to err—had become the very analogue of human intelligence, the image of man.
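
For a sense of the “tedious computation” ENIAC was built to take over, here is a rough sketch, in Python, of the sort of step-by-step trajectory arithmetic a firing table required; the numbers (muzzle velocity, launch angle, drag) are illustrative, not drawn from any actual table:

```python
# The kind of tedious arithmetic ENIAC was built for: stepping a shell's
# flight forward in small time increments. Values here (drag, muzzle
# velocity, angle) are invented for illustration, not from any firing table.
import math

def range_of_shell(v0=500.0, angle_deg=45.0, drag=0.0001, dt=0.01, g=9.81):
    """Crude Euler integration of a projectile with simple quadratic drag."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt          # drag slows the horizontal motion
        vy -= (g + drag * speed * vy) * dt    # gravity plus drag on the vertical
        x += vx * dt
        y += vy * dt
    return x

print(f"approximate range: {range_of_shell():,.0f} meters")
```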

Simon, along with his colleague Allen Newell, was a pioneer in the field of artificial intelligence, and it is worthwhile now to look back at Simon’s important book The Sciences of the Artificial, for here one can see the origins of the curious reasoning whereby the computer becomes a model for humanity. Simon begins by discussing what on the surface might seem obvious: the difference between the natural and the artificial worlds. Natural objects have the authority of existence, he says; the “laws” of nature determine what must be. The artificial, in contrast, is designed or composed in light of what might and ought to be.

But then Simon’s reasoning takes an odd turn. He goes on to define artifacts as “an ‘interface’ in today’s terms—between an ‘inner’ environment, the substance and organization of the artifact itself, and an ‘outer’ environment, the surroundings in which it operates”: that is, as bodiless processes mediating between inner and outer environments. Then: “Notice that this way of viewing artifacts applies equally well to many things that are not man-made—to all things in fact that can be regarded as adapted to some situation; and in particular it applies to the living systems that have evolved through the forces of organic evolution.” By the sixth page of his book, where this statement appears, human beings have been removed from the realm of the “natural.” Viewed as adaptable products of evolution, we have become hollow artifacts, interfaces to our environment, engineered “systems.” It is a startling turnabout: what is being proposed here is not the possibility of creating artificial life but the redefinition of life itself as artificial.

Once you accept the definition of human life as artificial—designed, engineered—it is then an easy matter to say that the proper study of man is not man but some other engineered object, the machine. And this is indeed what Simon advocates: making the computer itself the object of study, as a phenomenon of a living system. “Since there are now many such devices in the world [computers], and since the properties that describe them also appear to be shared by the human central nervous system, nothing prevents us from developing a natural history of them. We can study them as we would rabbits or chipmunks and discover how they behave under different patterns of environmental stimulation.” Standing amazed before this human-created machine, the computer scientist declares it to be our very identity; consequently, to learn who and what we are, he advises that we study . . . the machine.

This circular idea—the mind is like a computer; study the computer to learn about the mind—has infected decades of thinking in computer and cognitive science. We find it in the work of Marvin Minsky, the influential figure in artificial intelligence who, when asked if machines could think, famously answered: Of course machines can think; we can think and we are “meat machines.” And in the writings of cognitive scientist Daniel Dennett, whose book Consciousness Explained is suffused with conflations between human sentience and computers: “What counts as the ‘program’ when we talk of a virtual machine running on the brain’s parallel hardware? . . . How do these programs of millions of neural connection strengths get installed on the brain’s computer?” And in an extreme version in the predictions of Ray Kurzweil, the inventor of the Kurzweil music synthesizer and of reading systems for the blind, who sees the coming of “spiritual machines” almost entirely in the language of computer programming. Kurzweil calls memory the “mind file,” and looks forward to the day when we can scan and “download” the mind into a silicon substrate, analyzing it for its basic “algorithms,” thereby creating a “backup copy” of the original human being, all without the aid of natural evolution, which he calls “a very inefficient programmer.”

The limitations of this model of human intelligence should have become clear with the demise of AI, that first naive blush of optimism about the creation of sentient machines. In selecting the computer as the model of human thinking, AI researchers were addressing only one small portion of the mind: rational thought. They were, in essence, attempting a simulation of the neocortex—rule-based, conscious thinking—declaring that to be the essence of intelligence. And AI did prove successful in creating programs that relied upon such rule-based thinking, producing so-called expert systems, which codified narrow, specific, expert domains, such as oil exploration and chess playing. But the results of early AI were generally disappointing,[1] as the philosopher Hubert Dreyfus pointed out. They were systems devoid of presence and awareness, a “disturbing failure to produce even the hint of a system with the flexibility of a six-month-old child.”

[1] Some observers of the chess match between Garry Kasparov and IBM’s Deep Blue have used Deep Blue as an example of an AI program that achieved something of the presence we associate with sentience. “Kasparov reported signs of mind in the machine,” wrote Hans Moravec, the noted roboticist. I believe that there was indeed one game in which, probably due to some accidental combinatorial explosion, Deep Blue did not play like a machine, as Kasparov reported at the time. Kasparov then adjusted his play, looking for the strategies of that “mind,” which failed to reappear. This put Kasparov off his game, and he played rather badly (for him). The point is that the program had not attained sentience; the human had projected sentience onto the machine, and became flustered.

Yet the idea of human being as computer lives on. Most troubling, even after becoming controversial in computer science, it has taken up residence in the natural sciences. The notion of human as computational machine is behind the stubborn view of DNA as “code,” and it endures in the idea of the body as mechanism. As Rodney Brooks put it in an article in Nature: “The current scientific view of living things is that they are machines whose components are biomolecules.” The psychologist Steven Pinker, in the first pages of his classic book How the Mind Works, writes that the problems of understanding humans “are both design specs for a robot and the subject matter of psychology.” This view shows up in surprising places, for instance in an email I received from Lucia Jacobs, a professor of psychology at Berkeley who studies squirrel behavior: “I am an ethologist and know virtually nothing about computers, simulations, programming, mathematical concepts or logic,” she wrote. “But the research is pulling me right into the middle of it.” Herbert Simon’s views have come full circle; it is now standard scientific practice to study machine simulations as if they were indeed chipmunks, or squirrels. “What seems to be crystallizing, in short,” wrote Jacobs of her work with robots, “is a powerful outlook on spatial navigation, from robots to human reasoning. This is wonderful.” Psychology and cognitive science—and indeed biology—are thus poised to become, in essence, branches of cybernetics.

Failing to produce intelligence by modeling the “higher functions” of the cortex, cybernetics next turned to a model creature without any cortex at all: the ant. This seems like an odd place to look for human intelligence. Ants are not generally thought of as being particularly smart. But as a model they have one enormous advantage over human brains: an explanation of how apparent complexity can arise without an overseeing designer. A group of dumb ants produces the complexity of the ant colony—an example of organizational intelligence without recourse to the perennial difficulties of religion or philosophy.

Again, the source for this key idea seems to be Herbert Simon. The third chapter of The Sciences of the Artificial opens by describing an ant making its way across a beach:

We watch an ant make his laborious way across a wind- and wave-molded beach. He moves ahead, angles to the right to ease his climb up a steep dunelet, detours around a pebble, stops for a moment to exchange information with a compatriot. Thus he makes his weaving, halting way back to his home. So as not to anthropomorphize about his purposes, I sketch the path on a piece of paper. It is a sequence of irregular, angular segments—not quite a random walk, for it has an underlying sense of direction, of aiming toward a goal.

I show the unlabeled sketch to a friend. Whose path is it? An expert skier, perhaps, slaloming down a steep and somewhat rocky slope. Or a sloop, beating upwind in a channel dotted with islands or shoals. Perhaps it is a path in a more abstract space: the course of a search of a student seeking the proof of a theorem in geometry.

The ant leaves behind it a complex geometric pattern. How? The ant has not designed this geometry. Simon’s revolutionary idea was to locate the source of the complexity not in the ant, which is quite simply “viewed as a behaving system,” but in the ant’s interaction with its environment: in the byplay of the ant’s simple, unaware reactions to complications of pebble and sand. Simon then goes on to pronounce what will turn out to be inspirational words in the history of cybernetics: “In this chapter, I should like to explore this hypothesis but with the word ‘human being’ substituted for ‘ant.’”

With that, Simon introduces an idea that will reverberate for decades across the literature of robotics and artificial life. One can hardly read about the subject, or talk to a researcher, without coming upon the example of the ant—or the bee, or termite, or swarm, or some other such reference to the insect world. It’s not at all clear that the later adopters of “the ant idea” completely grasp the implications of Simon’s view; they seem to have dropped the difficulties of interaction with the environment, its vast complexity and variability, concentrating instead on low-level ant-to-ant communications. Yet what was distilled out of Simon’s utterance was a powerful model: An ant colony is an intricate society, but the complexity comes into being without a ruling “god” or mind to plan or direct it. The complicated order of the colony arises not from above, not from a plan, but from below, as a result of many “dumb,” one-to-one interactions between individual creatures.

This phenomenon, known as “emergence,” produces outcomes that cannot be predicted from looking only at the underlying simple interactions. This is the key idea in fields known, variously, as “complexity theory,” “chaos theory,” and “cellular automata.” It is also a foundational concept in robotics and Alife. “Emergence” lets researchers attempt to create intelligence from the bottom up, as it were, starting not from any theory of the brain as a whole but from the lowest-level elementary processes. The idea seems to be that if you construct a sufficient number of low-level, atomic interactions (“automata”), what will eventually emerge is intelligence—an ant colony in the mind.
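
For readers who want to see “emergence” at its most bare, a standard toy (my example, not one drawn from the robotics literature discussed here) is Conway’s Game of Life, in which a single local rule, applied everywhere at once, produces gliding structures no one designed:

```python
# Conway's Game of Life in a few lines: one simple, local rule from which
# moving "gliders" emerge without any plan. A standard toy illustration of
# emergence, not a model of any Alife system mentioned in this essay.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell lives next generation with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a "glider"
for _ in range(4):
    cells = step(cells)
print(sorted(cells))   # the same shape, shifted diagonally by one cell
```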

Sentience is not a thing, according to this view, but a property that arises from the organization of matter itself. Hans Moravec writes:

Ancient thinkers theorized that the animating principle that separated the living from the dead was a special kind of substance, a spirit. In the last century biology, mathematics, and related sciences have gathered powerful evidence that the animating principle is not a substance, but a very particular, very complex organization. Such organization was once found only in biological matter, but is now slowly appearing in our most complex machines.

In short, given enough computing power, easier to come by every year as the computational abilities of chips increase exponentially, it’s possible to build a robotic creature that crosses some critical threshold in the number of low-level, organizational interactions it is able to sustain. Out of which will emerge—like Venus surfacing from the sea on a half shell—sentience.

There is a large flaw in this reasoning, however. Machines are indeed getting more and more powerful, as predicted by former Intel chairman Gordon Moore in what is known as Moore’s Law. But computers are not just chips; they also need the instructions that tell the chips what to do; that is, the software. And there is no Moore’s Law for software. Quite the contrary: as systems increase in complexity, it generally becomes harder to write reliable code.

The thinking of today’s roboticists, like that of their predecessors in early AI efforts, is infected by their vision of the computer itself, the machine as model human. Again, they mistake the tool for its builder. In particular, the error comes from mistaking the current methods of software writing as a paradigm for human mental organization. In the 1970s a computer program was a centralized, monolithic thing, a small world unto itself, a set of instructions operating upon a set of data. It should be no surprise, then, that researchers at the time saw human intelligence as a . . . centralized, monolithic, logical mind operating upon the data in a “knowledge base.” By the 1990s that monolithic paradigm of programming had been replaced by something called “object oriented” methods, in which code was written in discrete, atomic chunks that could be combined in a variety of ways. And—what do you know?—human sentience is now seen as something emerging from the complex interaction of . . . discrete, atomic chunks. Is cognitive science driving the science of computing, or is it the other way around?
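
The shift described above can be made concrete with an invented toy: the same trivial payroll computation written first as one monolithic routine marching over a shared data set, then as small objects that carry their own state and behavior. The example is mine, meant only to show the difference in shape, not to reproduce anyone’s actual code:

```python
# An invented, minimal contrast between the two styles the essay describes.

# 1970s style: one monolithic routine operating on a shared data set.
def monolithic_payroll(records):
    total = 0.0
    for r in records:                      # all the logic lives in one place
        total += r["hours"] * r["rate"]
    return total

# 1990s style: small "objects" that each hold their own state and behavior,
# combined by sending messages (method calls) to one another.
class Employee:
    def __init__(self, hours, rate):
        self.hours, self.rate = hours, rate
    def pay(self):
        return self.hours * self.rate

class Payroll:
    def __init__(self, employees):
        self.employees = employees
    def total(self):
        return sum(e.pay() for e in self.employees)

records = [{"hours": 40, "rate": 22.5}, {"hours": 35, "rate": 30.0}]
assert monolithic_payroll(records) == Payroll(
    [Employee(**r) for r in records]).total()
```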

And there is a more fundamental problem in using today’s software methods as a paradigm for the emergence of human sentience: Software presupposes the existence of a designing mind, whereas the scientific view is that human intelligence arose, through evolution, without a conscious plan. A “little man,” a homunculus, lives inside software. To write code, even using “object oriented” methods that seek to work from the bottom up, someone must have an overall conception of what is going on. At some level there is an overriding theory, a plan, a predisposition, a container, a goal. To use the computer as a model, then—to believe that life arises like the workings of a well-programmed computer—is to posit, somewhere, the existence of a god.

A cybernetic rebuttal to this idea of a god would be that the “program” in the natural world—the organizational intelligence—is supplied by Darwinian selection. That is, the human programmer takes over the work of that “inefficient programmer”—evolution. But I think you can’t have it both ways: You can’t simultaneously say that you can program a “sentient” robot, freed from the pressures of survival and reproduction, and that the equivalent of programming in the natural world is natural selection, which is predicated on the pressures of reproduction and survival. You may say you are building something—a mechanical object that simulates some aspects of human sentience, for instance—but you cannot say that the organizational principles of that mechanical object illuminate the real bases of human sentience. The processes of engineering, particularly of programming, are not analogous to the processes of the natural world.

For example, the computer storage mechanisms that we call “memory” do not illuminate the workings of human memory. According to current research, the contents of human long-term memory are dynamic. Each time we recall something, it seems, we reevaluate it and reformulate it in light of everything relevant that has happened since we last thought about it. What we then “remember,” it turns out, is not the original event itself but some endless variant, ever changing in the light of experience. If a computer’s memory functioned that way, we would call it “broken.” We rely upon machine memory not to change; it is useful because it is not like us.
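
A toy sketch may make the contrast plainer. The “reconsolidating” store below is not a model of neuroscience, just an invented illustration of a memory that rewrites its record a little every time it is recalled, set against a machine store that returns exactly what was written:

```python
# A toy contrast, not neuroscience: machine "memory" returns exactly what
# was stored; the hypothetical reconsolidating store rewrites its record
# slightly each time it is recalled, as the essay describes current
# research on human long-term memory.
class MachineMemory:
    def __init__(self):
        self._store = {}
    def write(self, key, value):
        self._store[key] = value
    def recall(self, key):
        return self._store[key]          # bit for bit what was written

class ReconsolidatingMemory(MachineMemory):
    def recall(self, key, context=""):
        original = self._store[key]
        # each recall re-encodes the trace in light of the current context
        self._store[key] = f"{original} [recalled while: {context}]"
        return self._store[key]

m = MachineMemory(); m.write("party", "we danced until midnight")
h = ReconsolidatingMemory(); h.write("party", "we danced until midnight")
m.recall("party"); m.recall("party")
h.recall("party", "feeling nostalgic"); h.recall("party", "feeling tired")
print(m.recall("party"))   # unchanged, as we require of machines
print(h.recall("party"))   # the "memory" has drifted with each recall
```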

There are signs that even the cyberneticists who promoted the concept of bottom-up emergence as a paradigm for human sentience are sensing its limits. Christopher Langton, a key figure in Alife research, has admitted that there is the problem of “finding the automata”; that is, of deciding what indeed constitutes the lowest-level interactions that must be simulated in order to create life. How deep must one go? To interactions between cells? molecules? atoms? elementary particles of matter? Talking with me at a cafe in Linz, Austria, Langton looked up from a scribbled notebook and said with sincere worry: “Where’s the bottom of physics?” Meanwhile, the roboticist Rodney Brooks was wondering about the “top” of the problem, the higher-level cognitive functions that the theory of emergence seeks to portray as an effect of an organism’s organization. Sentience, after all, entails conscious action, intention, what we call “free will.” After years spent creating robots that were like insects, Brooks recognized that something else is involved in the grand project of intelligence. He is now revisiting the problem that stumped AI researchers in the 1970s: finding a way to give the cybernetic creature some internal representation of the world. “We’re trying,” said Brooks, “to introduce a theory of mind.”

The bottom of physics, a theory of mind. Here we go again. Back we are drawn into the metaphysical thickets from which engineering empiricism hoped permanently to flee. The hope was to turn sentience into a problem not of philosophy, or even of science, but of engineering. “You don’t have to understand thought to make a mind,” said Douglas Hofstadter, wishfully, while introducing the spiritual-robot panel at Stanford. “The definition of life is hard,” Rodney Brooks said to me. “You could spend five hundred years thinking about it or spend a few years doing it.” And here is the underlying motive of robotics: an anti-intellectualism in search of the intellect, a flight from introspection, the desire to banish the horrid muddle of this “thinking about it,” thousands of years of philosophical speculation about what animates us without notable progress. “You can understand humans either by reverse engineering or through building,” says Cynthia Breazeal, a young roboticist at MIT. In other words, don’t think about it, build it; equate programming with knowledge. Yet still we circle back to the old confusions, for conceptualization is as deep in our human nature as tool-building, Homo faber wrestling with Homo sapiens.

One way to get around the difficulties of human sentience is to declare humans all but irrelevant to the definition of life. This is the approach taken by Alife researchers, who see human beings, indeed all life on Earth, as “accidents,” part of the “highly accidental set of entities that nature happened to leave around for us to study.” As Christopher Langton writes in his introduction to Artificial Life: An Overview, “The set of biological entities provided to us by nature, broad and diverse as it is, is dominated by accident and historical contingency. . . . We sense that the evolutionary trajectory that did in fact occur on earth is just one out of a vast ensemble of possible evolutionary trajectories.”

Based on the same foundations as modern robotics—emergence theory—Alife’s goal is the creation of software programs that exhibit the properties of being alive, what is called “synthetic biology,” the idea being that researchers can learn more about life “in principle” if they free themselves of the specific conditions that gave rise to it on Earth. Alife research says farewell to the entire natural world, the what-must-be, in Herbert Simon’s formulation, with barely a backward glance (except to occasionally cite the example of ants).

“Life” in the context of Alife is defined very simply and abstractly. Here is one typical approach: “My private list [of the properties of life] contains only two items: self-replication and open-ended evolution.” And another: “Life must have something to do with functional properties . . . we call adaptive, even though we don’t yet know what those are.” Bruce Blumberg, an MIT researcher who creates robotic dogs in software animations, describes the stance of Alife this way: “Work has been done without reference to the world. It’s hard to get students to look at phenomena. It’s artificial life, but people aren’t looking at life.”

What Alife researchers create are computer programs—not robots, not machines, only software. The cybernetic creatures in these programs (“agents” or “automata”) go on to “reproduce” and “adapt,” and are therefore considered in principle to be as alive as we are. So does the image of the computer as human paradigm, begun in the 1950s, come to its logical extreme: pure software, unsullied by exigencies of carbon atoms, bodies, fuel, gravity, heat, or any other messy concern of either soft-tissued or metal-bodied creatures. Again the image of the computer is conflated with the idea of being alive, until only the computer remains: life that exists only in the machine.
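
Stripped to its bones, such a program amounts to something like the following sketch: bit-string “creatures” that copy themselves with occasional mutation, the better adapted copies crowding out the rest. This is a generic toy of my own, not any particular system built by the researchers quoted here:

```python
# A generic toy of the kind of thing Alife programs do: bit-string
# "creatures" that replicate with occasional mutation, the better-adapted
# copies persisting. A sketch only, not any specific research system.
import random

TARGET = [1] * 16                     # a stand-in "environment" to adapt to

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def reproduce(genome, mutation_rate=0.05):
    """Copy the genome, flipping each bit with a small probability."""
    return [1 - g if random.random() < mutation_rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)   # the fitter half "survives"
    survivors = population[:15]
    population = survivors + [reproduce(g) for g in survivors]

print("best creature matches its environment in",
      fitness(population[0]), "of 16 positions")
```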

What these views of human sentience have in common, and why they fail to describe us, is their disdain for the body: the utter lack of a body in early AI and in later formulations like Kurzweil’s (the lonely cortex, scanned and downloaded, a brain in a jar); and the disregard for this body, this mammalian flesh, in robotics and Alife. Early researchers were straightforward about discarding the flesh. Marvin Minsky pronounced us to be “meat machines.” “Instead of trying to consider the ‘whole person,’ fully equipped with glands and viscera,” said Herbert Simon, “I should like to limit the discussion to Homo sapiens, ‘thinking person.’” Meat and glands and viscera—you can sense the corruption implied here, the body as butchery fodder, polluting the discussion of intelligence.

This suspicion of the flesh, this quest for a disembodied intelligence, persists today. Ray Kurzweil brushes aside the physical life as irrelevant to the project of building “spiritual” beings: “Mammalian neurons are marvelous creations, but we wouldn’t build them the same way. Much of their complexity is devoted to supporting their own life processes, not to their information-handling abilities.” In his view, “life” and “information handling” are not synonymous; indeed “life” gets in the way. He sees evolution as “inefficient,” “a sloppy programmer,” producing DNA that is mostly “useless.” And Alife researchers, seeing “life” in their computer programs, pay no attention at all to the body, imagining that the properties of life can somehow, like tissue specimens, be cut away from the dross of living:

Whether we consider a system living because it exhibits some property that is unique to life amounts to a semantic issue. What is more important is that we recognize that it is possible to create disembodied but genuine instances of specific properties of life in artificial systems. This capability is a powerful research tool. By separating the property of life that we choose to study from the many other complexities of natural living systems, we make it easier to manipulate and observe the property of interest.

One might think that robotics, having as it does the imperative of creating some sort of physical container for intelligence, would have more regard for the human body. But the entire project of robotics—the engineering of intelligent machines—is predicated on the belief that sentience is separable from its original substrate. I had a talk with Cynthia Breazeal, who was a student of Rodney Brooks and is now on the faculty of the MIT Media Lab. Breazeal is a thoughtful researcher. Her work involves the creation of robots that respond to human beings with simulated emotional reactions, and she shows a sincere regard for the emotional life. Yet even she revealed an underlying disgust for the body. Growing impatient with me as I pressed her for a definition of “alive,” she said: “Do you have to go to the bathroom and eat to be alive?”

The question stayed with me—do you have to go to the bathroom and eat to be alive?—because Breazeal’s obvious intent was to pick what she considered the most base part of life, to make it seem ridiculous, humiliating even. But after a while I came to the conclusion: maybe yes. Given the amount of time living creatures devote to food and its attendant states—food! the stuff that sustains us—I decided that, yes, there might be something crucial about the necessities of eating and eliminating that defines us. How much of our state of being is dependent upon being hungry, eating, having eaten, being full, shitting. Hunger! Our word for everything from nourishment to passionate desire. Satisfied! Meaning everything from well-fed to sexually fulfilled to mentally soothed. Shit! Our word for human waste and our expletive of impatience. The more I thought about it, the more I decided that there are huge swaths of existence that would be impenetrable—indescribable, unprogrammable, utterly unable to be represented—to a creature that did not eat or shit.

In this sense, artificial-life researchers are as body-loathing as any medieval theologian. They seek to separate the “principles” of life and sentience—the spirit—from the dirty muck it sprang from. As Breazeal puts it, they envision a “set of animate qualities that have nothing to do with reproduction and going to the bathroom,” as if these messy experiences of alimentation and birth, these deepest biological imperatives—stay alive, eat, create others who will stay alive—were not the foundation, indeed the source, of intelligence; as if intelligence were not simply one of the many strategies that evolved to serve the striving for life. If sentience doesn’t come from the body’s desire to live (and not just from any physical body, from this body’s striving, this particular one), where else would it come from? To believe that sentience can arise from anywhere else—machines, software, things with no fear of death—is to believe, ipso facto, in the separability of mind and matter, flesh and spirit, body and soul.

Here is what I think: Sentience is the crest of the body, not its crown. It is integral to the substrate from which it arose, not something that can be taken off and placed elsewhere. We drag along inside us the brains of reptiles, the tails of tadpoles, the DNA of fungi and mice; our cells are permuted paramecia; our salty blood is what’s left of our birth in the sea. Genetically, we are barely more than roundworms. Evolution, that sloppy programmer, has seen fit to create us as a wild amalgam of everything that came before us: the whole history of life on Earth lives on, written in our bodies. And who is to say which piece of this history can be excised, separated, deemed “useless” as an essential part of our nature and being?

The body is even the source of abstract reasoning, usually thought of as the very opposite of the flesh, according to the linguists George Lakoff and Mark Johnson. “This is not just the innocuous and obvious claim that we need a body to reason,” they write, “rather, it is the striking claim that the very structure of reason itself comes from the details of our embodiment.” If we were made out of some other “details”—say, wire and silicon instead of sinew and bone—we might indeed have something called logic, but we would not necessarily recognize it as anything intelligent. It is this body, this particular fleshly form, that gave birth to the thing we call intelligence. And what I mean by this particular form is not just that of human beings or even primates. It is our existence as mammals. Oddly, in all the views of human intelligence promulgated by cybernetics, this is the one rarely heralded: what we call sentience is a product of mammalian life.

Mammalian life is social and relational. What defines the mammalian class, physiologically, is not dependence on the female mammary gland or egg-laying but the possession of a portion of the brain known as the limbic system, which allows us to do what other animals cannot: read the interior states of others of our kind. To survive, we need to know our own inner state and those of others, quickly, at a glance, deeply. This is the “something” we see in the eyes of another mammalian creature: the ability to look at the other and know that he or she has feelings, states, desires, that are different from our own; the ability to see the other creature looking back at us, both of us knowing we are separate beings who nonetheless communicate. This is what people mean when they say they communicate with their dogs or cats, horses or bunnies: mammals reading each other. We don’t go looking for this in ants or fish or reptiles; indeed, when we want to say that someone lacks that essential spark of life, we call him “reptilian.” What we mean by this is that he lacks emotions, the ability to relay and read the emotions of others; that he is, in short, robotic.

If sentience is a mammalian trait, and what distinguishes mammals is the capacity for social life, then sentience must have its root in the capacity for rich social and emotional interchange. That is, sentience begins with social life, with the ability of two creatures to transact their inner states—needs, desires, motivations, fears, threats, contentment, suffering, what we call “the emotions.” Moreover, the more avenues a creature has for understanding and expressing its emotional states, the more intelligent we say it is. Ants were not a good place to look for rich social interchange; the logical inference engines of early AI were a particularly poor choice of model; computer software running in the astringent purity of a machine won’t find it. To get at the heart of intelligence, we should have started by looking at the part of human life ordinarily considered “irrational,” the opposite of “logical,” that perennial problem for computers: emotions.

Some robotics researchers are beginning an investigation into the ways that a mechanical object can have, or appear to have, an emotional and social existence. “Most roboticists couldn’t care less about emotions,” says Cynthia Breazeal, one of the few researchers who does care about emotions. Her “Kismet” robot is a very cute device with the face and ears of a Furby. It has simulated emotional states (expressed adorably, floppy ears drooping piteously when it’s sad); it is designed to interact with and learn from humans as would a human child. “Social intelligence uses the whole brain,” she says. “It is not devoid of motivation, not devoid of emotion. We’re not cold inference machines. Emotions are critical to our rational thinking.”

Rodney Brooks speaks of adding “an emotional model,” of giving his new robot “an understanding of other people.” Cynthia Breazeal’s Kismet is designed to suffer if it doesn’t get human attention, and to care about its own well-being. Bruce Blumberg, who “does dogs,” as he puts it, understands that “you can’t say you’re modeling dogs without social behavior.” Of the three, however, only Blumberg seems to grasp the size of the problem they’re undertaking, to be willing to admit that there is something ineffable about a living being’s social and emotional existence. “My approach is to build computer devices to catch a spark of what’s really there in the creature,” he says, “to understand what makes dogs—and us—have a sort of a magical quality.” Then, perhaps embarrassed at this recourse to magic, he adds, “Ninety-nine percent of computer scientists would say you’re no computer scientist if you were talking in terms of ‘magical qualities.’”

Indeed, his colleague Breazeal has a pragmatic, even cynical, view of the emotions. Robots will need to have something like emotions, she says, because corporations are now investing heavily in robotic research and “emotions become critical for people to interact with robots—or you won’t sell many of them.” The point seems to be to fool humans. About her robot Kismet she says, “We’re trying to play the same game that human infants are playing. They learn because they solicit reactions from adults.”

But a human infant’s need for attention is not simply a “game.” There is a true, internal reality that precedes the child’s interchange with an adult, an actual inner state that is being communicated. An infant’s need for a mother’s care is dire, a physical imperative, a question of life or death. It goes beyond the requirement for food; an infant must learn from adults to survive in the world. But without a body at risk, in a creature who cannot die, are the programming routines Breazeal has given Kismet even analogous to human emotions? Can a creature whose flesh can’t hurt feel fear? Can it “suffer”?

Even if we leave aside the question of embodiment, even if we agree to sail away from the philosophical shoals of what it means to really have an emotion as opposed to just appearing to have one, the question remains: How close are these researchers to constructing even a rich simulation of mammalian emotional and social life? Further away than they realize, I think. The more the MIT researchers talk about their work, the longer grows the list of thorny questions they know they will have to address. “Is social behavior simply an elaboration of the individual?” asks Blumberg. “What does the personality really mean?” “We need a model of motivation and desires.” “How much of life is like that—projection?” From Breazeal: “How do you build a system that builds its own mind through experience?” And this great conundrum: “A creature needs a self for social intelligence—what the hell is that?” In turning to the emotions and social life, they have hit right up against what Breazeal calls the “limiting factor: big ideas.” Theories of learning, brain development, the personality, social interaction, motivations, desires, the self—essentially the whole of neurology, physiology, psychology, sociology, anthropology, and just a bit of philosophy. Oh, just that. It all reminded me of the sweet engineering naïveté of Marvin Minsky, back in the early days of AI, when he offhandedly suggested that the field would need to learn something about the nature of common sense. “We need a serious epistemological research effort in this area,” he said, believing it would be accomplished shortly.

Of course the biggest of the “big ideas” is that old bugaboo: consciousness. Difficult, fuzzy, and unwilling to yield up its secrets despite thousands of years devoted to studying it, consciousness is something robotics researchers would rather not discuss. “In our group, we call it the C-word,” says Rodney Brooks.

Consciousness, of course, is a problem for robots. Besides being hard to simulate, the very idea of consciousness implies something unfathomably unique about each individual, a self, that “magical quality” Bruce Blumberg is daring enough to mention. Brooks’s impulse, like that of his former student Cynthia Breazeal, is to view the interior life cynically, as a game, a bunch of foolery designed to elicit a response. Brooks is an urbane and charming man. He speaks with a soft Australian accent and seems genuinely interested in exchanging thoughts about arcane matters of human existence. He sat with me at a small conference table in his office at MIT, where photographs of his insectlike robots hung on the walls, and piled in the corner among some books was the robotic doll called “My Real Baby” he had made for the Hasbro toy company.

I mentioned Breazeal’s Kismet, told him I thought it was designed to play on human emotions. Then I asked him: “Are we just a set of tricks?” He answered immediately. “I think so. I think you’re a bunch of tricks and I’m just a bunch of tricks.”

Trickery is deeply embedded in the fabric of computer science. The test of machine intelligence that Alan Turing proposed in 1950, now known as “the Turing Test,” was all about fooling the human. The idea was this: Have a human being interact with what might be either another human or a computer. Place that first human behind some metaphorical curtain, able to see the text of the responses but unable to see who or what “said” them. If that human being cannot then tell if the responses came from a person or a machine, then the machine could be judged to be intelligent. A circus stunt, if you will. A Wizard of Oz game. A trick. To think otherwise, to think there was something more to intelligence than just the perception of a fooled human being, would be to believe there was some essence, a “something else,” in there. Just then, as I sat in Brooks’s office, I didn’t much feel like a bunch of tricks. I didn’t want to think of myself as what he had described as “just molecules, positions, velocity, physics, properties—and nothing else.” He would say this was my reluctance to give up my “specialness”; he would remind me that it was hard at first for humans to accept that they descended from apes. But I was aware of something else in me protesting this idea of the empty person. It was the same sensation I’d had while at the spiritual-robot symposium hosted by Douglas Hofstadter, an internal round-and-round hum that went, No, no, no, that’s not it, you’re missing something.
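
Reduced to code, Turing’s curtain is not much more than the sketch below, in which a judge sees only text and must guess what produced it. The canned bot and the person at the keyboard are placeholders of my own, not a claim about how any real test has been run:

```python
# A bare sketch of the "curtain" in Turing's imitation game: the judge sees
# only text and must guess what produced it. The respondents here are
# placeholders (a canned bot and a person at the keyboard).
import random

def bot_respond(prompt):
    return "That is an interesting question. Tell me more."

def human_respond(prompt):
    return input(f"(hidden human) {prompt}\n> ")

def imitation_game(judge_questions):
    label, respond = random.choice(
        [("machine", bot_respond), ("human", human_respond)])
    for q in judge_questions:
        print("JUDGE:", q)
        print("REPLY:", respond(q))        # the judge never sees who answered
    input("Judge, was that a human or a machine? ")
    print("It was in fact a", label)

imitation_game(["What is your favorite smell?",
                "Describe your childhood home."])
```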

I asked Brooks about the purpose of consciousness. “I don’t know,” he answered. “Do you know what consciousness is good for?”

Without hesitation, I told him that, yes, I did know what consciousness is good for. I told him we are born helpless and defenseless. Our only hope to survive is to make contact with other humans. We must learn to tell one individual from another, make alliances, immediately see on the face of another human being whether this is friend or foe, kin or stranger. I told him that I think human existence as a species is predicated on this web of social interactions, and for this we must learn to identify individuals. And out of that, the recognition of the identity of others, comes our own identity, the sense that we exist, ourselves, our self. Everything we call consciousness unwinds from that. “It’s not mystical,” I told him. “It’s an evolutionary imperative, a matter of life and death.”

Brooks put his chin on his hand and stared at me for a moment. Then he said: “Huh. None of our robots can recognize their own kind.”

It took me a while, but after thinking about Rodney Brooks’s remark about robots and their own kind, my round-and-round humming anxiety—that voice in me that kept protesting, No, no, you’re missing something—finally stopped. For there it was, the answer I was looking for, the missing something else: recognition of our own kind.

This is the “magical quality”—mutual recognition, the moment when two creatures recognize each other from among all others. This is what we call “presence” in another creature: the fact that it knows us, and knows we know it in turn. If that other being were just a trick, just the product of a set of mechanisms, you would think that snakes could make this recognition, or paramecia, or lizards, or fish. Their bodies are full of marvelous mechanisms, reflexes, sensors, to give them an awareness of the world around them. Ant pheromones should work. Robots with transponders beaming out their serial numbers should do the job. But we are, as Cynthia Breazeal said, creatures whose brains are formed by learning; that is, through experience and social interaction. We don’t merely send out signals to identify ourselves; we create one another’s identity.

It is true that the idea of the human being as a unity is not an entirely accurate concept. Most of our intelligence is unconscious, not available for introspection, having an independent existence, so to speak. And the body itself is not a unity, being instead a complicated colony of cells and symbiotic creatures. We can’t live without bacteria in our gut; tiny creatures live on our skin and eyelids; viruses have incorporated themselves into our cells. We’re walking zoos. Yet somehow, for our own survival (and pleasure) it is critical that we attain a unified view of ourselves as unique selves.

But I don’t think this idea of being a unique self is just some chauvinistic sense of specialness, some ego problem we have to let go of. Nature has gone to a great deal of trouble to make her creatures distinct from one another. The chromosomes purposely mix themselves up in the reproductive cells. Through the wonder of natural DNA recombination, nearly every human being on Earth is distinct from every other. This recombining of the genetic material is usually thought of as creating diversity, but the corollary effect is the creation of uniqueness. Twins fascinate us for this reason: because they are rare, the only humans on Earth without their own faces. We’re born distinct and, as our brains develop in the light of experience, we grow ever more different from one another. Mammalian life takes advantage of this fact, basing our survival on our ability to tell one from another, on forming societies based on those mutual recognitions. Uniqueness, individuality, specialness, is inherent to our strategy for living. It’s not just a trick: there really is someone different in there.

AI researchers who are looking at social life are certainly on the right path in the search to understand sentience. But until they grasp the centrality of identity, I don’t think they’ll find what they’re looking for. And then, of course, even supposing they grant that there is something called an identity, a unique constellation of body and experience that somehow makes a creature a someone, a self—even then, they’ll still have to find a way to program it.

Their task in simulating a self-identifying sentient creature will be a little like trying to simulate a hurricane. Think about how weather simulations work. Unable to take into account all of the complexity that goes into the production of weather (the whole world, essentially), simulations use some subset of that complexity and are able to do a fairly good job of predicting what will happen in the next hours or days. But as you move out in time, or at the extremes of weather, the model breaks down. After three days, the predictions begin to fail; after ten, the simulation no longer works at all. The fiercer the storm, the less useful the simulation. Hurricanes are not something you predict; they’re something you watch. And that is what human sentience is: a hurricane—too complex to understand fully by rational means, something we observe, marvel at, fear. In the end we give up and call it an “act of God.”
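
The arithmetic of why such simulations fail on schedule can be seen in a toy far simpler than any weather model. The logistic map below is a standard illustration of sensitive dependence, one reason (though not the only one) that forecasts have a horizon: two runs that begin almost identically agree for a while and then go their separate ways entirely:

```python
# Not a weather model: the logistic map is a standard toy for why forecasts
# have a horizon. Two runs that start almost identically track each other
# for a while, then diverge completely, much as the essay says simulations
# hold for a few days and then stop working.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.400000, 0.400001          # "measurements" differing in the 6th decimal
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: run A = {a:.4f}, run B = {b:.4f}")
# the early steps agree to several decimals; by step 40 the two runs
# bear no resemblance to each other
```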

Ellen Ullman, a former software engineer, is the author of Close to the Machine: Technophilia and Its Discontents and the novels The Bug and By Blood.
