From the October 2002 issue

Programming the Post-Human

Computer science redefines “life”

This was not the first time computer science thought it was on the verge of creating a successor race of machines. When I was a young programmer in the late 1970s and early 1980s, a branch of computer science then called “artificial intelligence” believed that it was close to creating an intelligent computer. Although AI would fail spectacularly in fulfilling its grand expectations, the debate surrounding the field was alluring. Like many at the time, I saw in AI the opportunity to explore questions that had previously been in the province of the humanities. What are we? What makes a human intelligent? What is consciousness, knowledge, learning? How can these things be represented to a machine, and what would we learn about ourselves in the formation of that representation? It was clear that as members of a secular society that has given up on the idea of God we would be looking elsewhere for the source of what animates us, and that “elsewhere” would be the study of cybernetic intelligence, the engine of postmodern philosophical speculation.

It is for this reason that the question of the post-human is worth exploring. Whether or not we can build a “spiritual robot” by 2100, in asking what is “post” human, we must first ask what is human. The ensuing debate inherits the questions that once belonged almost exclusively to philosophy and religion—and it inherits the same ancient, deep-seated confusions.

Over the years, as I listened to the engineering give-and-take over the question of artificial life-forms, I kept coming up against something obdurate inside myself, some stubborn resistance to the definition of “life” that was being promulgated. It seemed to me too reductive of what we are, too mechanistic. Even if I could not quite get myself to believe in God or the soul or the Tao or some other metaphor for the ineffable spark of life, still, as I sat there high in the balcony of the Stanford lecture hall, listening to the cyberneticists’ claims to be on the path toward the creation of a sentient being, I found myself muttering, No, that’s not right, we’re not just mechanisms, you’re missing something, there’s something else, something more. But then I had to ask myself: What else could there be?
