Article — From the October 2002 issue
I stared at her with a sort of horror, “Is that true?”
“All of it,” she said.
“And the great Byerley was simply a robot.”
“Oh, there’s no way of ever finding out. I think he was. But when he decided to die, he had himself atomized, so that there will never be any legal proof. Besides, what difference would it make?”
In January 2002, fifty-six years after the publication of Asimov’s story, a group of computer scientists, cryptographers, mathematicians, and cognitive scientists gathered at “the first workshop on human interactive proofs,” where their goal was the creation of a CAPTCHA, a “Completely Automated Probabilistic Public Turing Test to Tell Computers and Humans Apart.” In Asimov’s story, distinguishing robots from humans was a matter of world-historical importance, a question of human dignity and worth; the problem for the scientists at the workshop was the development of automated methods to prevent software robots, or “bots,” from invading chat rooms and barraging email systems with unwanted “spam” messages. Thus fantasy leaks into everyday life, a grand vision of the future reduced to a pressing, if mundane, commercial problem: how to tell human from machine.
What is interesting about this problem is that it’s one we humans have brought on ourselves. It is not a contest of human versus machine, though it is often presented that way; it is instead an outgrowth of what is most deeply human about us as Homo faber, the toolmaker. We have imagined the existence of robots, and having dreamed them up we feel compelled to build them, and to endow them with as much intelligence as we possibly can. We can’t help it, it seems; it’s in our nature as fashioners of helpful (and dangerous) objects. We can’t resist taking up the dare: Can we create tools that are smarter than we are, tools that cease in crucial ways to be “ours”?
Underlying that dare is a philosophical shift in the scientific view of humanity’s role in the great project of life. Researchers in robotics and artificial life (also known as “Alife,” the branch of computer science that concerns itself with the creation of software exhibiting the properties of life) openly question the “specialness” of human life. Some call life as we know it on Earth merely one of many “possible biologies,” and see our reverence for humanity as something of a prejudice (“human chauvinism”). Personhood has been defined as “a status granted to one another by society, not innately tied to being a carbon-based life form.” According to Rodney Brooks, director of MIT’s artificial-intelligence lab, evolution spelled the end of our uniqueness in relation to other living creatures by defining us as evolved animals; and robotics, in its quest to create a sentient machine, looks forward to ending the idea of our uniqueness in relation to the inanimate world. In what may reflect supreme humility (we are no better than the rocks or the apes) or astounding hubris (we can create life without the participation of either God or the natural forces of evolution), computer science has initiated a debate over the coming of the “posthuman”: a nonbiological, sentient entity.
According to this idea, the posthuman’s thoughts would not be limited by the slow speed of our own nervous systems. Unhampered by the messy wet chemistry of carbon-based life, loosed from the random walk of evolution, the posthuman can be designed, consciously, to exceed our capabilities. Its memory can be practically limitless. It can have physical strength without bounds. And, freed from the senescence of the cells, it might live forever. If this sounds like Superman (“with powers and abilities far beyond those of mortal man”), consider another of those moments when science fiction passes over into science:
The date was April 1, 2000. The place was a lecture hall on the campus of Stanford University. Douglas Hofstadter, the computer scientist perhaps best known for his book Gödel, Escher, Bach, assembled a panel of roboticists, engineers, computer scientists, and technologists, and asked them to address the question: “Will spiritual robots replace humanity by 2100?”
Despite the date, it was not an April Fools’ joke. Hofstadter began by saying, crankily, that he had “decided to eliminate naysayers” from the panel, making his point with a cartoon of a fish that thinks it is ridiculous that life could exist on dry land (“gribbit, gribbit,” went the sound of a frog). “It is more amazing,” he said, “that life could come from inert matter than from a change of substrate”—more amazing that life could arise from a soup of dead molecules than change its base from carbon to something else; silicon, for example. Hofstadter looked into the future and said, without nostalgia or regret: “I really wonder whether there will be human beings.”
The room was filled to fire-marshal-alarming proportions. People jammed the doors, stood against the walls, sat in the aisles, on the steps in the steep balcony of the lecture hall, leaned dangerously against the balcony rails. The audience, young and old, students and “graybeards” of the computing community of Silicon Valley, sat still and quiet, leaning forward, putting up with the crowding and the heat and Doug Hofstadter’s grouchy refusal to use a microphone. Sitting, as I was, high up in the balcony, the scene reminded me of nothing so much as those paintings of early medical dissections, crowds of men peering down to where the cadaver lay slashed open in the operating theater below. That day at Stanford there was the same sense that some threshold, previously taboo to science, had been crossed. Computer science, which heretofore had served humanity by creating its tools, was now considering another objective altogether: the creation of a nonbiological, “spiritual” being—sentient, intelligent, alive—who could surpass and, perhaps, control us.