You Talkin’ to Me?, by Meghan O’Gieblyn

From “Sentience and Sensibility,” which was published in the September/October issue of The Baffler.

It was all too easy to dismiss the Washington Post story about Blake Lemoine—the Google engineer who claimed this summer that his employer’s chatbot, LaMDA, was sentient—as an instance of clickbait, hype, and moral panic. Its many absurdities appeared contrived to exhaust the attention of a populace hollowed out by years of doomscrolling and news fatigue. As far as the machine learning community was concerned, the story was a distraction. There were, as these experts knew, legitimate issues with language models, and those issues had nothing to do with sentience but stemmed from the fact that the models were entirely unconscious, that they mindlessly parroted the racist, misogynistic, and homophobic language they’d absorbed from the data they’d been fed.

Those who did attempt to engage Lemoine, who saw the story as an opportunity to educate a bewildered public, found themselves explaining in the simplest terms possible why a language model that speaks like a human was not in fact conscious. And it turned out that the most expedient way to do so was to stress that the model “understood” (if it could be said to understand at all) one thing and one thing only: numbers. Because language models only perform math, and because they “consist mainly of instructions to add and multiply enormous tables of numbers together,” as one Google Research employee put it, they were not conscious agents. On this point, the most vocal tech pessimists found themselves toeing the party line at Google, which maintained that Lemoine had fallen prey to “anthropomorphizing today’s conversational models,” and insisted that “there was no evidence that LaMDA was sentient (and lots of evidence against it).”

But anyone capable of transcending the eternal now of the news cycle and recalling the debates of a decade ago might hear echoes in the Lemoine story of quite another dispute about personhood and language. LaMDA is not a single chatbot but a collection of chatbots, and it thus constitutes a kind of corpus mysticum, an entity whose personhood might be said to exist in a purely figurative sense, just as the Church and its members are called the “body of Christ” or—to take a more germane example—just as Google’s parent company, Alphabet Inc., is considered a legal person. Indeed, one of the central delusions of the Supreme Court’s Citizens United decision was not merely that corporations were persons, but that money was speech—that numbers in their grossest iteration could be construed as a form of constitutionally protected expression. And beneath the commentary about LaMDA and AI personhood, there existed more indelible confusions about the difference between aggregates and persons, about the distinction between numbers and language, and even, at times, about what it means to have emotions, complex motivations, and moral agency.

It’s tempting to imagine a future in which our disputes over personhood are viewed by some higher intelligence much as we regard Scholastic debates about the metaphysical constitution of angels. And perhaps that advanced mind will intuit, correctly, that our confusion stemmed from the fact that long-standing definitions had been recently overruled. Justice John Paul Stevens’s insistence that Citizens United marked “a rejection of the common sense of the American people” was prescient in grasping that the decision, far from being a sleight of hand that relied on a technical redefinition of terms like “person” and “speech,” carried deeper ontological consequences. It cannot be entirely coincidental that soon after corporations were granted personhood and the right to speak there emerged a widespread conviction that they were conscious.

In 2012, following an election in which corporate spending reached an all-time high, Whole Foods CEO John Mackey and business professor Raj Sisodia published Conscious Capitalism, a book arguing that corporations could develop altruistic motives. For its part, Google has always paid lip service to morality in stark, Manichean terms, though in its early days few people deciphered any real thought behind it. While the slogan “don’t be evil” first appeared in Google’s 2004 IPO letter, it wasn’t until the Trump years that it ceased being a cynical punch line and began appearing on the placards of employee walkouts—protests over military contracts, ICE collaborations, and sexual harassment—where it was earnestly leveraged as evidence that the corporation had abandoned its values. Some commentators could not help sneering at the naïveté of young people who’d taken corporate pabulum at face value, who had confused taking a job with “signing up for a movement,” as one former Googler told the press. But it was hard to disparage their demands, which amounted to the rather modest insistence that words should have meaning.

One is less inclined to extend such generosity to lawmakers, who, throughout the House disinformation hearings last year, scolded tech giants for doing precisely the kind of things that corporations do—maximizing user engagement, trying to keep people on their services for as long as possible—and enjoined them to be “Good Samaritans” and “stewards” of the public trust. Many representatives appeared to think corporate objectives were synonymous with the political beliefs of their CEOs, an absurdity that reached its zenith when Mark Zuckerberg, Sundar Pichai, and Jack Dorsey were each made to answer whether they “personally” believed in the efficacy of vaccines. If anthropomorphism involves imagining a soul where there are only unconscious calculations, Google seems to have successfully and widely elicited that illusion.

Lemoine was not the first Google employee to claim he’d been fired for raising ethical objections to AI language models. Throughout 2021, tech coverage was awash in stories about Timnit Gebru and Margaret Mitchell, two Google computer scientists who co-authored a paper arguing that algorithms that have been fed the entirety of Reddit and 4chan will, when prompted with words like “women” or “black” or “queer,” spit out stereotypes and hate speech. Google initially approved their paper, “On the Dangers of Stochastic Parrots,” but later asked Gebru to retract it, claiming, Gebru says, that it put undue stress on the technology’s negative potential. Shortly after Gebru demanded a more concrete account of the review process, she was told that Google was “accepting” her resignation. Less than three months later, Mitchell was also given the axe.

The incident might have been an opportunity to reckon with the limits of ethical AI in the private sector, but it became, instead, a familiar tale of censorship and suppression, of tech bros silencing women and people of color—a narrative that Gebru and Mitchell had courted, perhaps knowing which buzzwords would trigger the media algorithm. They called the race for bigger language models “macho,” and Mitchell compared it to anxiety about penis size.

This narrative, however, smoothed over some enduring confusions about Google’s methods of repression, which, whatever their ultimate purpose, do not seem to rely on the familiar gestures of censorship. To hear Lemoine speak about Google’s “very complex” internal structure is to glimpse what the internet might feel like if it were bottled as concentrate. “There are thousands of mailing lists,” he wrote in 2019. “A few of them have as many as thirty or forty thousand employee subscribers . . . several of the biggest ones are functionally unmoderated. Most of the political conflict occurs on those giant free-for-all mega-lists.” Given the public controversy these forums have created, it’s not immediately clear why Google continued to host them.

Even Gebru’s account of her time at Google suggests something more complex than corporate muzzling. Far from being ignored, she recalls that she and her team were “inundated” with requests from co-workers about ethical problems that needed immediate attention, that she was frequently conscripted into meetings and diversity initiatives, that she was constantly called upon to write and speak. “I’ve written a million documents about a million diversity-related things,” she told one interviewer, “about racial literacy and machine-learning, ML fairness initiatives, about retention of women . . . so many documents and so many emails.” And yet somehow this outpouring of words and speech, of protocols and consultation, did not amount to communication in any meaningful sense of the word.

Google itself has long operated under the premise that “there’s no such thing as too much speech.” Its ambition to organize the world’s knowledge is guided by its belief that “more information is better for users,” even as its search index steadily balloons toward the astronomical number to which its name alludes. To understand how Google regards speech, however, one might recall that its co-founder Larry Page once claimed that he and Sergey Brin chose the name Alphabet for Google’s parent company because “it means a collection of letters that represent language” and language “is the core of how we index with Google search.” It would be difficult to imagine a more succinct distillation of how the company regards the nuances of language as jumbles of zeros and ones—its tendency to see search queries, user posts, and the entirety of the internet’s content as so much empty syntax to be compiled into infographics and used for targeted advertising or transmogrified into algorithmic training data—words liquified into pure capital.

Contemporary politics, enmeshed as it is in Orwellian cosplay and First Amendment panic, has been slow to realize that institutions no longer have to oppress by restricting speech—that in an age of data extraction, when human expression is a lucrative form of biofuel, it is, on the contrary, in the interest of these platforms to enjoin us at every turn to share, to post, to speak up. If Google has secured its dominance through political back channels that regard money as speech, it has similarly profited from the public’s tendency to forget that the primary value of speech for any company that trades in data is not qualitative but quantitative. It matters very little whether the language Google subsumes is on the right or the left, whether it is affirming or protesting systems of power. Lemoine’s attempt to hack the media cycle only ended up creating a glut of “content” that will probably create more value for advertisers, and for Google itself, than it will for the public good. The notion that capitalism metabolizes dissent is no longer theoretical but embodied in the architecture of its most profitable corporate technologies. To happen across Gilles Deleuze’s claim, now some four decades old, that “repressive forces don’t stop people from expressing themselves, but rather, force them to express themselves,” is to wonder what on earth he was speaking of if not the internet.

Justice Stevens concluded in his objection to Citizens United that the notion that “there is no such thing as too much speech” maintains “little grounding in evidence or experience.” Such a premise might be sound, he said, if “individuals in our society had infinite free time to listen to and contemplate every last bit of speech uttered by anyone, anywhere.” In truth, corporate funds had the potential to flood the airwaves so as to “decrease the average listener’s exposure to relevant viewpoints,” and “diminish citizens’ willingness and capacity to participate in the democratic process.”

This conclusion is not limited to political advertisements but encapsulates how corporations like Google profit from an oversaturated “marketplace of ideas,” particularly when they control 92 percent of the market for access to it. While Google undoubtedly benefits indirectly from the diminished public engagement needed to hold it accountable, it is also explicitly cashing in on information fatigue. LaMDA is a response to the fact that a deluge of search results “induces a rather significant cognitive burden on the user,” as a 2021 Google Research paper put it. What users want, the paper affirms, is not information but a “domain expert” who can save them from the information glut and replace the cacophonous chatter of the web with a single authoritative voice—Google’s.