Article — From the October 2002 issue

Programming the Post-Human

Computer science redefines “life”

There are times when you feel you are witnessing the precise moment when science fiction crosses over into science. I don’t mean the advent of gadgetry like voice-recognition software or flat-panel computer monitors, or even the coming of space tourism, but a moment when one of the great, enduring conundrums of our speculative literature suddenly materializes as an actual human situation. Perhaps the greatest of these is the problem of distinguishing humans from robots—what unique element, if any, separates us from machines?—a theme that received its classic treatment by Isaac Asimov in his 1946 story “Evidence.” In the story, a future society has forbidden the use of humanoid robots, since it is feared that the intelligent machines, identical in appearance to humans but with superior powers, will take over the world. A man named Stephen Byerley, who is suspected of being a robot, eventually does just that, becoming the first “World Coordinator.” Many years later, however, his humanity is still in doubt:

I stared at her with a sort of horror, “Is that true?”

“All of it,” she said.

“And the great Byerley was simply a robot.”

“Oh, there’s no way of ever finding out. I think he was. But when he decided to die, he had himself atomized, so that there will never be any legal proof. Besides, what difference would it make?”

In January 2002, fifty-six years after the publication of Asimov’s story, a group of computer scientists, cryptographers, mathematicians, and cognitive scientists gathered at “the first workshop on human interactive proofs,” where their goal was the creation of a CAPTCHA, a “Completely Automated Probabilistic Public Turing Test to Tell Computers and Humans Apart.” In Asimov’s story, distinguishing robots from humans was a matter of world-historical importance, a question of human dignity and worth; the problem for the scientists at the workshop was the development of automated methods to prevent software robots, or “bots,” from invading chat rooms and barraging email systems with unwanted “spam” messages. Thus fantasy leaks into everyday life, a grand vision of the future reduced to a pressing, if mundane, commercial problem: how to tell human from machine.

What is interesting about this problem is that it’s one we humans have brought on ourselves. It is not a contest of human versus machine, though it is often presented that way; it is instead an outgrowth of what is most deeply human about us as Homo faber, the toolmaker. We have imagined the existence of robots, and having dreamed them up we feel compelled to build them, and to endow them with as much intelligence as we possibly can. We can’t help it, it seems; it’s in our nature as fashioners of helpful (and dangerous) objects. We can’t resist taking up the dare: Can we create tools that are smarter than we are, tools that cease in crucial ways to be “ours”?

Underlying that dare is a philosophical shift in the scientific view of humanity’s role in the great project of life. Researchers in robotics and artificial life (also known as “Alife,” the branch of computer science that concerns itself with the creation of software exhibiting the properties of life) openly question the “specialness” of human life. Some call life as we know it on Earth merely one of many “possible biologies,” and see our reverence for humanity as something of a prejudice (“human chauvinism”). Personhood has been defined as “a status granted to one another by society, not innately tied to being a carbon-based life form.” According to Rodney Brooks, director of MIT’s artificial-intelligence lab, evolution spelled the end of our uniqueness in relation to other living creatures by defining us as evolved animals; and robotics, in its quest to create a sentient machine, looks forward to ending the idea of our uniqueness in relation to the inanimate world. In what may reflect supreme humility (we are no better than the rocks or the apes) or astounding hubris (we can create life without the participation of either God or the natural forces of evolution), computer science has initiated a debate over the coming of the “post-human”: a nonbiological, sentient entity.

According to this idea, the post-human’s thoughts would not be limited by the slow speed of our own nervous systems. Unhampered by the messy wet chemistry of carbon-based life, loosed from the random walk of evolution, the post-human can be designed, consciously, to exceed our capabilities. Its memory can be practically limitless. It can have physical strength without bounds. And, freed from the senescence of the cells, it might live forever. If this sounds like Superman (“with powers and abilities far beyond those of mortal men”), consider another of those moments when science fiction passes over into science:

The date was April 1, 2000. The place was a lecture hall on the campus of Stanford University. Douglas Hofstadter, the computer scientist perhaps best known for his book Gödel, Escher, Bach, assembled a panel of roboticists, engineers, computer scientists, and technologists, and asked them to address the question: “Will spiritual robots replace humanity by 2100?”

Despite the date, it was not an April Fools’ joke. Hofstadter began by saying, crankily, that he had “decided to eliminate naysayers” from the panel, making his point with a cartoon of a fish that thinks it is ridiculous that life could exist on dry land (“gribbit, gribbit,” went the sound of a frog). “It is more amazing,” he said, “that life could come from inert matter than from a change of substrate”—more amazing that life could arise from a soup of dead molecules than change its base from carbon to something else; silicon, for example. Hofstadter looked into the future and said, without nostalgia or regret: “I really wonder whether there will be human beings.”

The room was filled to fire-marshal-alarming proportions. People jammed the doors, stood against the walls, sat in the aisles, on the steps in the steep balcony of the lecture hall, leaned dangerously against the balcony rails. The audience, young and old, students and “graybeards” of the computing community of Silicon Valley, sat still and quiet, leaning forward, putting up with the crowding and the heat and Doug Hofstadter’s grouchy refusal to use a microphone. Sitting, as I was, high up in the balcony, the scene reminded me of nothing so much as those paintings of early medical dissections, crowds of men peering down to where the cadaver lay slashed open in the operating theater below. That day at Stanford there was the same sense that some threshold, previously taboo to science, had been crossed. Computer science, which heretofore had served humanity by creating its tools, was now considering another objective altogether: the creation of a nonbiological, “spiritual” being—sentient, intelligent, alive—who could surpass and, perhaps, control us.

Ellen Ullman, a former software engineer, is the author of Close to the Machine: Technophilia and Its Discontents and the novels The Bug and By Blood.
