
Context Clues

In his attempt to dramatize the birth of artificial intelligence [“The Gods of Logic,” Essay, July], Benjamín Labatut presents half-truths and decontextualized ideas. George Boole’s unification of logic and mathematics was not unprecedented (Leibniz attempted something similar nearly two hundred years earlier, with his characteristica universalis), nor was Boolean algebra “ignored” until Claude Shannon “chanced” upon it in the Thirties (it in fact inaugurated the field of symbolic logic, which thinkers like Charles Sanders Peirce, Alfred North Whitehead, and Bertrand Russell developed in the late nineteenth and early twentieth centuries). These omissions serve the story that the computer came from nowhere—like the Logos in the Bible, or the monolith in 2001. And that story is a dangerous one.

Labatut describes the imagination as “that feral territory that will always remain a necessary refuge and counterpoint to rationality.” The thinkers he cites, however, demonstrate that it is precisely the realm of the imagination that is “bound by statistics”—which, far from meaning that it is deterministic, implies that the capacity for creativity depends on the very patterns in which it is embedded. In his groundbreaking 1948 paper on information theory, Shannon cited Finnegans Wake as a limit case of informational density. For Shannon, Joyce’s play with language showed that the amount of information in a text was a function of its departure from an expected pattern. This degree of unpredictability—Shannon called it entropy—could be measured statistically, and its quantification is foundational to machine learning (and indeed all computing). Large language models are fiction engines: they use statistics to turn randomness into information. That’s not magic; it’s context.
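
A minimal illustration of the quantity Duesterberg invokes, rendered in Python rather than in Shannon's notation; the function name and example strings are illustrative, not drawn from the letter or from Shannon's paper. Zero-order entropy treats each character independently: the more a text departs from a predictable pattern, the more bits per symbol it carries.

```python
from collections import Counter
from math import log2

def entropy_bits_per_char(text: str) -> float:
    """Zero-order Shannon entropy: H = -sum(p * log2(p)),
    with p taken from character frequencies in the text."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive string is highly predictable, hence low-entropy:
print(entropy_bits_per_char("abababababab"))                  # 1.0 bit/char
# Varied text, like Joyce's, packs more information per symbol:
print(entropy_bits_per_char("riverrun, past Eve and Adam's"))  # ~3.9 bits/char
```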

What we need from narrative now is help drawing the line that separates imagination and reality, fiction and fact—not blurring it. As computers have recently figured out, the imagination is a powerful thing. The question is whether we will take responsibility for what we invent.

James Duesterberg
New York City

Labatut’s call for us to move away from computer logic and the idol of reason is persuasive, but fiction and imagination are hardly the only forms of thought left to turn to in the age of AI. We should also do our part to prevent the narrow form of logic encoded in neural networks from subsuming other rudiments of logical reasoning within science, philosophy, and the humanities. Taking Boolean logic to be the apotheosis of logical thought, as Labatut does, not only overlooks the symbolic logic that earlier generations of computer scientists thought could be the basis of computational thinking, but also sidelines the dialectical and discursive thinking that has formed the basis for entire philosophical traditions. Humans have the manifest capacity to nimbly and intentionally move among these systems and modes of thought. Can computers constructed to “think” in ones and zeros match that methodological pluralism?

Moreover, modern AI systems like OpenAI’s GPT-4 rely on more than just the binary model that Labatut describes. They run on GPUs made of metals mined under hazardous conditions, and sold and shipped at volatile prices. They swallow the digitized records of humankind past and present, consume water by the gallon, and sap energy from the grid. They are trained by fleets of precariously employed contract workers tasked to judge and correct simulated humanity. Labatut mentions these darker elements, yet the utopian vision and existential dread that he confers on Geoffrey Hinton and his counterparts hover eerily outside any discussion of the labor, power, and capital that enable such contemporary models in the first place. By reminding ourselves of the deeply material bases of artificial intelligence, we can ground the aim of humanizing an increasingly digital world in systemic and social changes, instead of purely mental operations.

Eli Frankel
Brooklyn, N.Y.

Ars Machina

How a poem elicits emotion is more mysterious than how a combination unlocks a safe—yet, as Laurent Dubreuil’s essay [“Metal Machine Music,” July] suggests, our emotional safe is on the verge of being hacked. Dubreuil writes that a Turing test “is always an inquiry into the conceptions humans entertain about themselves.” We think, for instance, that language is complex, but any child can succeed in cracking its code. There is no need to swallow the entire content of the web; the child can detect patterns in language by applying principles like complexity minimization.

Simplicity is one of the few cognitive mechanisms that humans are inherently equipped with, and this may explain why we tend toward rhyme, periodic stress, and alliteration in poetry. Configurations that are unexpectedly simple can be experienced as surprising, and even emotional. The disastrous “cigar” in Dubreuil’s computer-generated ending to Emily Dickinson’s poem has the opposite effect: it appears out of the blue, increasing complexity in the existing context.
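
One way to state the mechanism Dessalles describes as an equation, borrowing from his own simplicity theory (the formalization is not given in the letter): the unexpectedness of a situation $s$ can be written as

$$U(s) = C_w(s) - C_d(s),$$

the gap between the complexity $C_w(s)$ of generating $s$ and the complexity $C_d(s)$ of describing it. Rhyme and alliteration shrink $C_d$, making a poem simpler than its generation would predict, so $U$ rises and the configuration registers as surprising; the out-of-the-blue “cigar” inflates $C_d$ instead, and the effect collapses.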

Since our cognitive mechanisms inevitably produce regularities that statistics can locate and reproduce, we must prepare for a future nourished by artificial poetry, generative paintings, and synthetic music. But are we prepared to share emotions with entities that cannot feel them?

Jean-Louis Dessalles
Associate Professor of AI and Cognitive Modeling, Institut Polytechnique de Paris

I find much to admire in Dubreuil’s article, but this line about Hollywood is dubious: “if so many of these writers had not already reduced their trade to a series of plot twists and rehashed situations or characters, LLMs would have little appeal.” Screenwriters didn’t turn to recycled plots because they lacked a fresher vision; they were responding to the incentives of their bosses, who prefer proven formulas to more idiosyncratic stories. LLMs are a mere accentuation of preexisting forces that have accelerated the degradation of creative fields, where quantitative metrics are increasingly favored over slipperier judgments of quality.

If LLMs were just a tool to free workers from intellectual drudgery and allow them more time for creative self-development, they might be liberating. But this possibility is stifled by the economic system we live in, where whatever time is saved by automating the creative process will allow not experimentation or careful composition, but layoffs and demands for higher output. Everyone—whether teachers pressed to overvalue standardized test scores, or journalists working in the clickbait mines—can see this. In Labor’s End: How the Promise of Automation Degraded Work, Jason Resnikoff quotes an autoworker: “All automation has meant to us is unemployment and overwork. Both at the same time.”

Ben Davis
Brooklyn, N.Y.

