
One of the few clear goods to have emerged from the social and political turmoil of the past decade is a collapse of faith in public opinion polls. Last fall’s off-year elections were not as disastrous for the industry as the three presidential contests that preceded them, but they still represented another blow to pollster prestige. The former governor and Democratic superhack Terry McAuliffe lost a Virginia gubernatorial race in which most outlets had shown him narrowly but consistently ahead, while in solidly blue New Jersey the Democratic incumbent, Governor Phil Murphy, barely squeaked out a win after many had predicted a huge margin of victory.

The latter race led the director of New Jersey’s own Monmouth University Polling Institute—one of the nation’s most widely respected surveyors—to apologize. “The growing perception that polling is broken cannot be easily dismissed,” Patrick Murray wrote in an op-ed for the Newark Star-Ledger. Similarly searching statements were heard after the 2012, 2016, and 2020 elections, but they were generally followed by a promise to fix the problem. Murray was more dramatic: “If we cannot be certain that these polling misses are anomalies,” he wrote, “then we have a responsibility to consider whether releasing horse race numbers in close proximity to an election is making a positive or negative contribution to the political discourse.”

This is an admirable gesture toward self-critique, but it assumes that even accurate polling makes a positive contribution to political discourse. After all, the negative effects for which Murray apologized—the suppression of turnout, the fomenting of complacency—might just as easily have been caused by numbers that held up in the end.

Nonetheless, Murray deserves credit for considering the possibility that we “rethink the value” of political polling. Granted, he’s not suggesting that Monmouth get out of the business entirely, only that it give up the horse race to concentrate on public-interest, or “issues,” surveys, a move that the biggest names (Gallup and Pew Research Center) have already made. But it’s not at all clear that the growing skepticism about this work can be confined to the matter of elections. It’s time to consider a world without polls.

The ubiquity of opinion polling is one of those features of modern life that seem natural and even inevitable until we examine them closely, at which point they prove to be frankly bizarre. What would be lost if we no longer knew (or pretended to know) how many of our fellow citizens “approved” of the president on any given day? I’m not entirely sure on any given day whether I approve of the president, and it hardly seems worth my time to figure it out.

One defense of polling is that it makes elected officials more responsive to citizens by communicating the will of the people. But this is actually a fairly novel view of things. It was not so long ago that Bill Clinton was disparaged for fecklessly “governing by poll.” When his infamous strategist Dick Morris admitted that polling was used to determine which issues to prioritize, a minor political scandal ensued. Horrified commentators treated it as a cynical abdication of duty. Only someone with no principles would take the public’s temperature before making a policy decision.

Now the situation is roughly reversed. A politician who ignores the public sentiment made manifest in polls is thought to be in thrall to special interests or partisanship or ideology. But does anyone think that our government has become more responsive to its citizens? Less partisan or ideological? In practice, of course, we want politicians to follow the lead of the majority only when we ourselves are in it. A politician who bucks popular opinion to do something we agree with is courageous. All of which suggests we could do without the polls themselves.

Things get stranger when we leave the political realm. As I write this, Pew has just released a report on the meaning of life. It surveyed nearly 19,000 people across seventeen advanced economies and concluded that Australians find more meaning in friends than in material well-being, while Canadians do the opposite. South Koreans and Japanese, meanwhile, are the most likely to name only one source of meaning. So long as Pew was taking up the topic, there’s another question I wish had been asked: How did we ever come to believe that surveys of this kind could tell us something significant about ourselves?

One version of the story begins in the middle of the seventeenth century, after the Thirty Years’ War left the Holy Roman Empire a patchwork of sovereign territories with uncertain borders, contentious relationships, and varied legal conventions. The resulting “weakness and need for self-definition,” the French researcher Alain Desrosières writes, created a demand among local rulers for “systematic cataloging.” This generally took the form of descriptive reports. Over time the proper methods and parameters of these reports became codified, and thus was born the discipline of Statistik: the systematic study of the attributes of a state.

As Germany was being consolidated in the nineteenth century, “certain officials proposed using the formal, detailed framework of descriptive statistics to present comparisons between the states” by way of tables in which “the countries appeared in rows, and different (literary) elements of the description appeared in columns.” In this way, a single feature, such as population or climate, could be easily removed from its context. Statistics went from being a method for creating a holistic description of one place to what Desrosières calls a “cognitive space of equivalence.” Once this change occurred, it was only a matter of time before the descriptions themselves were put into the language of equivalence, which is to say, numbers.

The development of statistical reasoning was central to the “project of legibility,” as the anthropologist James C. Scott calls it, ushered in by the rise of nation-states. Strong centralized governments, Scott writes in Seeing Like a State, required that local communities be made “legible,” their features abstracted to enable management by distant authorities. In some cases, such “state simplifications” occurred at the level of observation. Cadastral maps, for example, ignored local land-use customs, focusing instead on the points relevant to the state: How big was each plot, and who was responsible for paying taxes on it?

But legibility inevitably requires simplifying the underlying facts, often through coercion. The paradigmatic example here is postrevolutionary France. For administrative purposes, the country was divided into dozens of “departments” of roughly equal size whose boundaries were drawn to break up culturally cohesive regions such as Normandy and Provence. Local dialects were effectively banned, and use of the new, highly rational metric system was required. (As many commentators have noted, this work was a kind of domestic trial run for colonialism.)

One thing these centralized states did not need to make legible was their citizens’ opinions—on the state itself, or anything else for that matter. This was just as true of democratic regimes as authoritarian ones. What eventually helped bring about opinion polling was the rise of consumer capitalism, which created the need for market research.

But expanding the opinion poll beyond questions like “Pepsi or Coke?” required working out a few kinks. As the historian Theodore M. Porter notes, pollsters quickly learned that “logically equivalent forms of the same question produce quite different distributions of responses.” This fact might have led them to doubt the whole undertaking. Instead, they “enforced a strict discipline on employees and respondents,” instructing pollsters to “recite each question with exactly the same wording and in a specified order.” Subjects were then made “to choose one of a small number of packaged statements as the best expression of their opinions.”

This approach has become so familiar that it may be worth noting how odd it is to record people’s opinions on complex matters by asking them to choose among prefabricated options. Yet the method has its advantages. What it sacrifices in accuracy it makes up in pseudoscientific precision and quantifiability. Above all, the results are legible: the easiest way to be sure you understand what a person is telling you is to put your own words in his mouth.

Scott notes a kind of Heisenberg principle at work in state simplifications: “They frequently have the power to transform the facts they take note of.” This is another advantage to multiple-choice polling. If people are given a narrow range of opinions, they may well think that those are the only options available, and in choosing one, they may well accept it as wholly their own. Even those of us who reject the stricture of these options for ourselves are apt to believe that they fairly represent the opinions of others. One doesn’t have to be a postmodern relativist to suspect that what’s going on here is as much the construction of a reality as the depiction of one.

The rise of opinion polling required a second methodological breakthrough: random sampling, which made it possible to produce statistically significant results without having to canvass the entire population.

The idea that a sample of a few thousand people might offer insights into a population of millions or hundreds of millions is not intuitive. It was first proposed by the French mathematician Pierre-Simon Laplace in the 1800s, but it took another century for the practice of sampling to catch on. Even then it remained controversial, and statisticians disagreed about whether samples should be random or “purposively” selected—that is, chosen for their supposed representativeness.
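The arithmetic behind this counterintuitive idea is simple enough to sketch. For a simple random sample, the expected error depends almost entirely on the size of the sample, not the size of the population it is drawn from (this is a standard back-of-the-envelope calculation, not a formula taken from any particular polling firm):

```latex
\text{MoE}_{95\%} \approx 1.96\sqrt{\frac{p(1-p)}{n}}
\;\le\; \frac{1.96 \times 0.5}{\sqrt{n}}
\;\approx\; \frac{1}{\sqrt{n}},
\qquad \frac{1}{\sqrt{1000}} \approx 0.032 .
```

Here $p$ is the true proportion holding some opinion and $n$ the sample size; the worst case is $p = 0.5$. A random sample of a thousand people thus yields a margin of error of roughly three percentage points, whether the population being surveyed numbers one million or three hundred million—provided, crucially, that the sample really is random.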

As Desrosières notes, the key moment in the public acceptance of random-sample polling came in 1936, when Gallup used the method to forecast Franklin D. Roosevelt’s landslide reelection. Surveyors using other methods had predicted that Roosevelt would be swept out of office amid the continuing Depression. The gain in prestige for Gallup—which had been founded only the year before—was enormous. People have since largely taken for granted that random-sample polling could tell us meaningful things about the population as a whole. Gallup called sixteen of the next eighteen elections, missing the famous “Dewey Defeats Truman” upset and Jimmy Carter’s narrow win over Gerald Ford but correctly forecasting John F. Kennedy’s tight victory over Richard Nixon. That record was long the source of the company’s authority. Then came the 2012 election, which Gallup called for Mitt Romney even as most pollsters predicted Barack Obama’s reelection. Initially, Gallup pledged to improve its methods, but it ultimately announced that it would no longer participate in horse-race polling at all, instead putting its “time and money and brainpower into understanding the issues.”

And therein lies the problem with outfits such as Pew and Gallup and (perhaps) Monmouth giving up on elections: these events are “accountability moments” for pollsters as well as politicians, rare occasions when public surveying is forced to confront reality. There have been efforts to explain why elections are particularly challenging events to assess, but if anything the opposite would seem to be true. “Are you going to vote and, if so, for whom?” is a question whose range of answers can be far more easily reduced to multiple choice than “What do you think of abortion?” or “Are you hopeful for the future?” If polls cannot reliably give us the answer to the first question, why would we trust them with the others? At the time of Gallup’s announcement, its editor in chief, Frank Newport, acknowledged that elections provided “an external standard” by which its work could be judged, but he didn’t say what would be replacing them.

It’s perhaps not a coincidence that the same period in which horse-race polling broke down also saw a proliferation of opinion polls with results that were, on their face, completely absurd. Around the time that Gallup gave up on elections, for example, it told us that one in ten Americans believed Barack Obama to be a Muslim. Now, there’s no question that these results said something about American society, but it’s not clear exactly what. Outside the context of opinion polling, asking people whether they believe their president—an avowed Christian—to be a Muslim is a strange thing to do. It suggests that the answer is unsettled, or that it might in some way be up to the respondent. There is no personal cost to deciding one way or another—no penalty for being “wrong”—and answering in the affirmative is an easy way to express your distrust or contempt for a president with dark skin and a “foreign” name.

Incidentally, it’s also an easy way to express your distrust or contempt for the sorts of establishment institutions that are asking the question in the first place. One of the most significant features of contemporary conspiracy theorizing is precisely how incomprehensible it is to those of us in the “mainstream.” That seems to be part of the point. Scott writes that local cultures have always resisted efforts to make them legible, noting that every society tends “to modify, subvert, block, or even overturn the categories imposed upon it.” He explains that it is thus “useful to distinguish what might be called facts on paper from facts on the ground.”

Donald Trump’s 2016 election, his refusal to admit defeat in 2020, the January 6 insurrection: these were all unquestionably on-the-ground facts, and we have a responsibility to try to figure out their causes. But the widely reported statistic that 15 percent of Americans believe that sex-trafficking Satanic cannibals are running the government might be more of a paper fact.

If we want to know what people really think, we may need to start actually listening, rather than deciding in advance which opinions are possible and inviting people to choose among them. Perhaps what’s needed is something more like statistics in the old German sense—descriptive reports that capture our society in all its baroque weirdness. The results may be too peculiar for easy aggregation, but that is a risk we’ll have to take. We might find more answers than we imagined possible.
