We live, we are told, in distracted times. The internet has destroyed our ability to concentrate, condemning us to a future of agitated doomscrolling. Our alienated children stare at their screens, poking at buttons like lab rats, barely able to comprehend the polychrome sewage flashing before their red-rimmed eyes. Our attention, once a tool of exquisite interiority, has been shattered by the excessive stimulation of the modern world, leaving us prey to manias and phantoms. We’re like disaster victims, burned out and disoriented. “We are on the verge of losing our capacity as a society for deep, sustained focus,” as one typical commentator has put it. “We are slipping toward a new dark age.”

This crisis of attention seems like a profoundly contemporary malaise, and questions about the quality of our attention and its objects—how deeply, to whom, and to what we direct it—carry political and economic weight. Almost 10 percent of American children have been given a diagnosis of ADHD. Companies worry that productivity is suffering because employees spend all their time on social media, and this concern is used to justify intensifying workplace surveillance. The public sphere is fractured by conspiracy theories; political agency has been drained away by the distractions of misinformation. In a 2015 talk, the commentator Nicholas Carr exhorted his audience to “smash” their smartphones. Elsewhere, he has described the internet as “not just technological progress but a form of human regress,” chipping away at the hard-won attentiveness that he sees as characteristic of civilization and returning us to a prehistoric state of anxious hypervigilance. If we’re constantly waiting for predators to pounce, we’ll never get anything done.

But what is attention? Is it an active faculty, the power to direct the mind toward something external? Or is it an effect of particular circumstances—a quiet room, the absence of troubling thoughts? When street noise makes it hard to write, I put on headphones and play William Basinski’s Watermusic. The imperceptible shifts of sound and sense of enclosure induce the mental state I know as concentration, the “deep attention” that so many observers believe is disappearing from our culture. If distraction is the diversion or diffusion or dilution of attention, this is its opposite: directedness, intensification, fixation on a task. It doesn’t feel like something I will so much as something I induce.

To be deeply attentive to one thing is, of course, to let others fall away. William James offered a very civilized formulation, writing that “my experience is what I agree to attend to”—as if it were some sort of mental cotillion, an event to which he had received a letterpress invitation on a pleasingly thick card. How far we can be said to experience things to which we aren’t attentive is an ancient preoccupation. In On the Nature of Things, Lucretius warns that because of the profusion of images, “the mind cannot perceive / any except those it strives to see.” If I’m not actively aware of something, is it still having an effect somewhere in my brain, or does it simply not exist for me at all? The idea of the unconscious, motor of so much twentieth-century culture, is partly a response to experiences at the ragged edges of attention.

In a now famous experiment by the psychologists Daniel Simons and Christopher Chabris, subjects were shown a short video in which two teams of people, one in white shirts, the other in black, passed basketballs. The viewers were asked to keep a silent count of the number of passes made by the white team, and at least half of them didn’t notice when a figure in a gorilla suit entered the frame. The counting task demands hypervigilance in relation to events in the whiter part of the visual field, which in turn induces hypovigilance in relation to the darker parts. So-called inattentional blindness suggests that attention could be a way for the brain to allocate scarce processing power. In 1958, as telephony exploded and engineers devised new ways to route growing call volumes through fixed-capacity lines, the psychologist Donald Broadbent proposed that what he called “selective perception” represented just this kind of problem. If one system was taking in a lot of information, and a second, smaller one had the work of sorting it, then attention might be a mechanism to handle traffic flow at the bottleneck. This perspective became very influential, and even as telephone systems have given way to computers and the internet as the dominant source of technical analogies, the idea has persisted that attention has something to do with managing the brain’s finite resources.
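
To make the image concrete, here is a rough sketch in code, my own toy illustration rather than Broadbent’s actual model: a torrent of sensory items arrives at every moment, a selective filter nudges the attended channel to the front of the queue, and a small processor takes only the first few. The channels, rates, and capacity are invented for the example.

```python
import random

# A toy sketch of the bottleneck idea (an illustration only, not
# Broadbent's actual 1958 model): many sensory items arrive each tick,
# a selective filter nudges one attended channel to the front of the
# queue, and a small central processor handles only a few items per
# tick. Everything here (channels, rates, capacity) is invented.

CHANNELS = ["white team", "black team", "gorilla"]
ITEMS_PER_TICK = 20     # incoming sensory traffic per tick (assumed)
CAPACITY_PER_TICK = 3   # central processing capacity per tick (assumed)

def run(ticks=1_000, attended="white team"):
    through = {ch: 0 for ch in CHANNELS}
    for _ in range(ticks):
        incoming = [random.choice(CHANNELS) for _ in range(ITEMS_PER_TICK)]
        # Attended-channel items sort to the front; only the first few
        # fit through the bottleneck.
        queue = sorted(incoming, key=lambda ch: ch != attended)
        for ch in queue[:CAPACITY_PER_TICK]:
            through[ch] += 1
    return through

if __name__ == "__main__":
    # Nearly everything that gets through is "white team"; the gorilla
    # almost never makes it.
    print(run())
```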

An important theory developed by the psychologist Anne Treisman expands on this. Sense data—shapes, colors, movements, sounds—enters the brain in parallel, partly encoded in modular systems, so that one area deals with the color of the apple and a completely different area deals with its shape, and so on. This gives rise to the so-called binding problem: How does the brain bundle the redness and the roundness and assign both traits to the same object? Is attention a window or perhaps a spotlight that moves around, illuminating spatial areas and allowing the brain to perform this work? Still other theories relate attention to our need to act rather than just perceive, to blot some things out so we don’t become overwhelmed by navigating the storm of sensory signals in which we exist.

Though it’s now possible to measure and describe brain states that correspond to the experience of attentional effects, the exact function of attention is so slippery that some researchers have started to wonder whether it’s even a meaningful cognitive category. In a paper bluntly titled “There Is No Such Thing as Attention,” the psychologist Britt Anderson dismisses the concept as “a sort of mental phlogiston,” concluding that the word could be deleted from most psychological discussions without losing much. This position was entertained by none other than William James. “Attention,” he wrote, “may have to go, like many a faculty once deemed essential, like many a verbal phantom, like many an idol of the tribe. It may be an excrescence on Psychology.”

If attention doesn’t exist, or is at least a mental state or experience that is not clearly understood, what exactly is in crisis? The pessimistic narrative of a fall from focus into postcivilizational distraction is not a contemporary invention. Eighty-five years ago, T. S. Eliot, ever ready to strike a mournful pose, looked around at the “strained time-ridden faces” of his fellow passengers on the London Underground and pronounced them “distracted from distraction by distraction / Filled with fancies and empty of meaning / Tumid apathy with no concentration.” After the advent of radio and cinema, people began distinguishing between high and low culture on the grounds of the quality of attention each attracted. As Walter Benjamin put it in an essay published the year before Eliot’s “Burnt Norton,” “the masses are criticized for seeking distraction in the work of art, whereas the art lover supposedly approaches it with concentration.”

As a Marxist, Benjamin was alert to the political implications of patrician disdain, and suggested that what he called “reception in distraction” might actually help in understanding the kaleidoscopic bustle of modern urban life. However, the association of new media with crises of attention goes back much further. In Distraction: Problems of Attention in Eighteenth-Century Literature, the literary scholar Natalie Phillips describes how the proliferation of early print publications changed reading habits. Instead of devoting one’s attention to a small library of precious books, it was now possible to dip into things, to divert oneself with articles in gossipy magazines such as The Tatler and The Spectator, even—horror of horrors—to skim. In the introduction to Alexander Pope’s mock epic The Dunciad, the pseudonymous Martinus Scriblerus (writing from the future) explains that the poet lived at a time when “paper also became so cheap, and printers so numerous, that a deluge of authors covered the land.” The result was information overload. Samuel Johnson complained that readers were so distracted that they “looked into the first pages” before moving on to other options. One of the lasting monuments to this new print culture, Laurence Sterne’s Tristram Shandy, makes a comedy of its narrator’s distraction, which he attributes to his mother having interrupted his father at the crucial moment of conception to ask whether he’d remembered to wind the clock.

Though eighteenth-century thinkers, like their modern counterparts, valued sustained focus, they also felt that distraction was not altogether a bad thing. In 1754, the encyclopedist Denis Diderot wrote that

distraction arises from an excellent quality of the understanding, which allows the ideas to strike against, or reawaken one another. It is the opposite of that stupor of attention, which merely rests on, or recycles, the same idea.

Diderot wryly suggests a possibility that is overlooked in the story of an internet-driven attention crisis. Distraction is certainly bad when driving a car or reading philosophy, but in other contexts, toggling between activities and juxtaposing different registers of information can be fertile and productive. Indeed, it’s key to creativity—at least this is what I (and my ninety-five open browser tabs) will maintain if you ask. It isn’t that the distracted writer is unable to focus on anything at all; his attention is captured, fleetingly, by various things, and whether that’s useful or not depends very much on context.

The literary scholar N. Katherine Hayles calls this mental state “hyper attention,” which she defines as “switching focus rapidly among different tasks, preferring multiple information streams, seeking a high level of stimulation, and having a low tolerance for boredom.” She identifies this disposition with a new generational divide, and argues that it poses a challenge to educators whose courses are designed for “deep attention.” But Diderot and my overloaded browser suggest (if only anecdotally) that we ought to be cautious about invoking grand epistemic shifts, that perhaps switching between modes of attention is just a normal part of our cognitive routine.

Diderot’s valorization of distraction lands uneasily because we also tend to think of attention as a virtue. Attention is the “rarest and purest form of generosity,” as Simone Weil put it. To be inattentive is to forget to call, to fail to notice that someone is upset, to let the baby play with the kitchen knives. Distraction is also, in an older sense, insanity. In King Lear, Gloucester is pained to see Lear’s condition. “Better I were distract,” he says. “So should my thoughts be sever’d from my griefs, / And woes by wrong imaginations lose / The knowledge of themselves.” Attention deficit is thus, at least implicitly, not merely a cognitive deficiency but a moral failing. We don’t medicate our children in such numbers because we want them to do well in school. It’s because we fear that they will become bad people.

Whether or not it’s a virtue, attention is now considered scarce. As digitization has reduced the cost of transmitting information to near zero and increased its volume to near infinity, the business model of many of the world’s largest corporations rests on “capturing eyeballs.” The involuntary consumption of advertising is now such a ubiquitous experience that it can sometimes feel like a tax on the act of perception itself. This model involves a second-order version of Broadbent’s bottleneck: a vast flow of content, a finite population of consumers. At the pinch point, in the role of attentional window or spotlight, are various filters—tools we use to restrict what we see, recommendations and reviews. These filters are in an arms race with ever more sophisticated mechanisms for cajoling us to engage. We squirm under the gaze of the organizations that monitor us, and we dream of going offline, but we have also learned to crave the feeling of being watched. We now live in a world ordered according to what the literary scholar Yves Citton, following Maurice Merleau-Ponty, has termed an “ontology of visibility,” which “measures a being’s level of existence by the quantity and quality of its perception by others.” For those rich in visibility, there are ways to convert attention into material wealth. The invisibles exist just a little less.

This ontology is more fundamental than the frothy influencer economy. We say we “pay attention,” and there is a sense in which the act of looking does confer value on its objects. Google PageRank, possibly the most important algorithm in the world, and certainly the most powerful mechanism yet invented for organizing the world’s attention, weights the value of results by quantifying the number and quality of links to a page. It is an iterative process. What others have found useful rises to the top. The collection and organization of this knowledge on a global scale is something qualitatively new, a network diagram of our collective desires. And it is of course immensely valuable.
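
For the curious, here is a minimal sketch of that iterative idea, with an invented handful of pages and the conventional damping factor rather than anything resembling Google’s production system: each page’s score is repeatedly handed out along its outgoing links, so pages linked to by other well-regarded pages rise to the top.

```python
# A minimal sketch of the iterative idea behind PageRank, with an
# invented link graph and the conventional 0.85 damping factor; it is
# not Google's production system. Each page's score is repeatedly
# redistributed along its outgoing links, so pages linked to by other
# well-regarded pages rise to the top.

links = {            # hypothetical pages and their outgoing links
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}            # start with equal scores
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split score among links
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

if __name__ == "__main__":
    # "c", the page most others point to, comes out on top.
    print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```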

So maybe we are experiencing something more consequential than a moral panic about novel experiences of distraction. Is the internet capable of remolding our highly plastic brains in its image? Maybe. Can it reconfigure us for hyperattentive consumption in the new economy? Is it teaching us to open wider so we can cram more in? If we are being changed, I suspect that what we are losing is not so much the ability to focus as the experience of untrammeled interiority. The attention economy is fundamentally extractive. We are the coal and Big Tech is the miner. It has inserted itself into our most intimate circuits, those subtle feedback loops of self-reflection (self-worth, self-consciousness, self-care . . . ) that arise when we direct ourselves inward, experiences in which we seek a standard, a measure of how we’re doing. We shape ourselves through self-reflexivity, but perhaps we are also in a sense pre-shaped, our desires and subjectivities organized according to a grammar that has been given to us by our culture—which increasingly means tech corporations—so that our very experience of ourselves flows through channels already carved by likes and shares. We work “for exposure,” because even if a thousand followers won’t put food on the table, a million might. We express ourselves in machine-readable formats. One day we shall be paid for the work we have done. One day we shall go viral.

