Facts, notoriously, do not care about our feelings. They are not subjective, but objective. The “I” who experiences emotion is located in time and space, the owner of a single window on reality. Facts, on the other hand, are general and universal. The philosopher Thomas Nagel called his book on objectivity The View from Nowhere, and there is something eternally seductive about the thought of breaking out of the confines of the single window, seeing beyond its little square. “We may,” Nagel wrote, “think of reality as a set of concentric spheres, progressively revealed as we detach gradually from the contingencies of the self.”
We might hear in Nagel’s cosmic detachment an echo of anatta—the Buddhist doctrine that there is no essence or soul grounding human existence. For Buddhists, the clear light of reality is visible only to those who abandon the illusion of selfhood. Objectivity, in the way non-Buddhists usually think about it, doesn’t erase the self, even if it involves a flight from individuality. It actually seems to make the self more powerful, more authoritative. The capacity to be objective is seen as something to strive for, an overcoming of the cognitive biases that smear or smudge the single window and impair our ability to see the world “as it really is.” Objectivity is earned through rigor and discipline. It is selfhood augmented.
In the summer of 2020, the National Museum of African American History and Culture was forced to apologize for offering an educational presentation that listed aspects of “white culture,” including such eyebrow-raising entries as “cause and effect relationships,” “plan for future,” and “objective, rational linear thinking.” Though it was intended as an antiracist guide, the slide, with its blandly definitive tone, became an easy target for conservatives eager to undermine the civil rights movement that was exploding after the murder of George Floyd. “Objectivity” also appears on a poster I saw recently, produced by a diversity consultancy more than twenty years ago, under the heading “Characteristics of White Supremacy Culture.” It is illustrated as a bottle of poison, alongside other toxic substances such as “either/or thinking” and “worship of the written word.”
The idea that objectivity might be poisonous seems to open the way to a kind of brain-melting relativism, the end of the possibility of knowledge itself. For many years, the biologist and culture warrior Richard Dawkins has raised the alarm, in often melodramatic fashion:
Show me a cultural relativist at thirty thousand feet and I’ll show you a hypocrite. . . . If you are flying to an international congress of anthropologists or literary critics, the reason you will probably get there—the reason you don’t plummet into a ploughed field—is that a lot of Western scientifically trained engineers have got their sums right.
One of the ironies of our overheated present moment is that the rationalist panic about social constructionism has become aligned with the Christian right’s panic about moral relativism. If divine truth is single and universal, then multiplicity and relativism are the signatures of evil. Could undermining objectivity literally be the work of the devil?
If we take a breath and sift through the DEI (diversity, equity, and inclusion) literature in which objectivity is negatively characterized, there’s disappointingly meager evidence of a diabolical plot to obliterate shared reality, or to force engineers to consider cultural factors when calculating thrust and lift. Rather it appears to be a way to talk about the more modest domain of institutional power relations. In a school or office, “objectivity” can be a pose or performance, a way to claim authority or deny it to those whose behavior doesn’t conform. The claim is not that the speed of light depends on how you feel, man, but that certain kinds of affectless social performance are coded as white and used to police non-white people. The opposite of “objective” in this case would be the stereotypical “angry black woman.” If you’re expected to behave emotionally and “irrationally,” you will be forced to adopt a rigorously neutral demeanor or find yourself at a disadvantage compared with others whose objectivity is assumed. In a Supreme Court confirmation hearing, Brett Kavanaugh can raise his voice and talk about beer, whereas Ketanji Brown Jackson has to demonstrate angelic calm.
In academia, there’s a more substantial debate about knowledge and objectivity. Can you know things by virtue of your “identity,” or your position in a social system? Are certain kinds of knowledge in some way subjective, rather than objective? So-called standpoint epistemology chips away, as the name suggests, at the idea of a disembodied, universal knower, and its ideas find their way, in sometimes garbled fashion, into the materials produced by DEI consultants. Though identity is a vague and messy concept, and doesn’t seem like a secure foundation for knowledge, it is surely true that underlings have insights into an organization that the boss lacks, or that the peasant who has grown up under the mountain understands it differently—and perhaps, for a climber, more usefully—than the geographer who can measure the height of the peak.
Objectivity, in fact, was not always a cornerstone of the scientific method. In their history of the subject, Lorraine Daston and Peter Galison examine eighteenth-century scientific atlases, which sought to present typical specimens of plants and animals. Individual examples were “corrected” by scientists and the illustrators who assisted them, to help them conform to the archetypes that were seen as the true objects of inquiry. “The mere idea of an archetype in general implies that no particular animal can be used as our point of comparison,” Goethe wrote in a treatise on animal skeletons. “The particular can never serve as a pattern for the whole.” The scientist, in that period, was someone whose experience and wisdom allowed him to discern the pattern underlying his collection of specimens. He wasn’t absent, or restraining himself in the name of being objective. His trained subjectivity was an important tool, a machine for detecting archetypal signals through the world’s noise.
Daston and Galison point out, presumably to avoid an irate call from Dawkins, that objectivity is not quite the same as “truth” or “certainty” or “precision.” In their account, the aspiration to a knowledge that bears no trace of the knower only took shape in the middle of the nineteenth century, with the advent of photography. The camera offered “blind sight,” an objective vision free from the scientist’s intrusive presence. The photographic image became—and remained for at least a century—a guiding ideal, offering scientists the possibility of representing the world without succumbing to the human temptation to interpret or beautify.
In the twentieth century, the quest for objectivity became more rarefied and abstract. Vision, once a cornerstone of objectivity, fell under suspicion. Color perception is highly subjective, and photography turned out to involve all sorts of human choices and interventions. Scientists and philosophers began to seek objectivity in structures, the invariant relationships between things. The point was to reveal, as the logician Rudolf Carnap put it, an “objective world, which can be conceptually grasped and is indeed identical for all subjects.” The individual, the changeable, and the subjective were, as ever, the enemies of this way of thinking, which sought truths that would persist through every conceivable shift in perspective and circumstance.
The goal was not just to find theories that were useful, but to confirm that some theories were objectively true. It wasn’t enough to say that something worked. Scientists had to show how and why. In the twenty-first century, arguably the greatest threat to that commitment has come not from pink-haired cultural relativists, but from software engineers. In 2008, Chris Anderson, the former editor of Wired magazine, argued that big data was making the scientific method obsolete:
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
We don’t need explanations, Anderson argues. We don’t need to know why things happen. The world is as it is. We have tools to help us find our way through it, so why go through the hassle of making hypotheses and testing them?
The idea that numbers can “speak for themselves” sounds familiar. It’s another version of the ideal of objectivity: no human intervention, no interpretation, no theory. But when Anderson writes about “massive amounts of data and applied mathematics,” he means statistics, and statistics have a great deal of subjectivity baked in.
When we think of statistical objectivity, we tend to imagine something that emerges from repetition, from long runs of results. You toss a coin and note whether it comes out heads or tails. After you’ve done this many times, you can estimate the probability of each outcome. This may be straightforward with a coin toss, but when you’re trying to determine the probabilities of complex phenomena, you are confronted with the question of what’s relevant and what’s not. In the 1860s, John Venn (he of the diagram) wrote that
every individual thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite number of different classes of things.
It’s hard to design a model if you can’t draw a neat circle around the system you’re trying to represent. The weather is not the same as a coin toss.
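A toy simulation makes the contrast concrete. This is only a sketch, in Python, with an invented helper name (estimate_heads_probability); nothing in the sources discussed here prescribes it. For a coin, the long run really does settle the matter, because there is only one obvious reference class: tosses of this coin. Venn’s difficulty begins as soon as the event belongs to many classes at once.

```python
import random

random.seed(0)

def estimate_heads_probability(n_tosses: int, true_bias: float = 0.5) -> float:
    """Frequentist estimate: toss many times and count the long run."""
    heads = sum(random.random() < true_bias for _ in range(n_tosses))
    return heads / n_tosses

# The long run converges on a stable answer for a coin...
for n in (10, 100, 10_000, 1_000_000):
    print(n, estimate_heads_probability(n))

# ...because a toss has one obvious reference class: tosses of this coin.
# Venn's difficulty: a day of weather belongs to indefinitely many classes
# (July days, humid days, days after a cold front), and the estimate shifts
# with the class you decide to count over.
```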
Underlying much of big data is an approach to statistics that sidesteps this issue, by abandoning the “view from nowhere.” Bayesian statistics is named for the eighteenth-century Presbyterian minister who formulated it—apparently as a hobby, since he never published his work. Instead of a world seen from nowhere, it takes the view of an observer who can’t see much of anything. Someone like us, perhaps, lost in the forest of signs. The Bayesian seeker starts by making bets, assigning initial probabilities to various possible outcomes of the system whose behavior she wants to predict. Each result is then used to modify those bets. Bayesian probability is thus a measure of the strength of your current belief in something; in other words, how willing you are to wager that something is true. An automated Bayesian model starts out with a guess—perhaps an informed one, perhaps not—but as that guess is honed by experience, it gets better at whatever task it’s been assigned to do, whether that’s predicting stock prices or deciding whether it’s looking at a picture of a cat. Without knowing anything, without a theory or a sense of what’s important, the machine begins to understand the world.
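In code, the mechanism is almost disarmingly simple. The sketch below is a hypothetical illustration, not any particular system: the belief about a coin’s bias is kept as a Beta distribution (a standard textbook choice), the starting parameters are the opening wager, and each toss revises them. The helper update_belief is invented for the example.

```python
# A minimal sketch of the Bayesian move described above: start with a bet,
# then let each observation revise it. The belief about a coin's bias toward
# heads is kept as a Beta(alpha, beta) distribution.

def update_belief(alpha: float, beta: float, observation: int) -> tuple[float, float]:
    """Revise the current belief after one toss (1 = heads, 0 = tails)."""
    return alpha + observation, beta + (1 - observation)

alpha, beta = 1.0, 1.0          # the opening wager: every bias equally plausible
tosses = [1, 1, 0, 1, 1, 1, 0]  # what the world actually does

for t in tosses:
    alpha, beta = update_belief(alpha, beta, t)
    print(f"after seeing {t}: expected P(heads) = {alpha / (alpha + beta):.2f}")
```

Nothing in it is a view from nowhere: the opening numbers are somebody’s bet, and a different bet yields a different early trajectory, even on the same data.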
Or does it? Anderson is certainly right to say that machine learning is immensely powerful. Data gathered from the world is turned into action. Walmart discovers that before a hurricane, people buy Pop-Tarts and beer, so it sends more trucks to Florida. Why Pop-Tarts? Who cares? A political campaign tweaks its ads to drive engagement from its base. Ad A works better than ad B. The strategists don’t know why. They just know their guy won. In a certain way, machine learning is a triumph of impersonal objectivity. Human beings are so decentered that without considerable effort, it’s impossible to understand how an ML system comes to a particular conclusion. All we know, in most cases, is that we input the data and an answer comes out. Is this knowledge? If so, who is the knower? Not us.
These results may turn out to be “true” in that they allow us to make correct predictions. They may also be misleading. Machine learning models are not neutral. As with photography, the technology launders subjectivity into objectivity. Humans tweak the systems, keeping them in the sweet spot where they don’t think everything is a cat or recognize only the cat pictures they were shown in training. Their output may feel objective, but they’re shaped by human hands. In some ways we’re still in the world of the eighteenth-century naturalist, peering over the illustrator’s shoulder, correcting the shape of a leaf. Slightly longer, slightly wider. We tell ourselves that this is the view from nowhere, but who are we kidding?
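To make that shaping hand concrete, here is a toy illustration of my own, not a description of any real pipeline: the same noisy data, fitted three times with NumPy, and the only thing separating a model that misses the pattern from one that memorizes its training noise is a number a person chose.

```python
import numpy as np

rng = np.random.default_rng(0)

# The same noisy data, fitted three times; only the degree changes.
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

for degree in (1, 4, 12):
    coeffs = np.polyfit(x, y, degree)     # the machine "learns"
    fit = np.polyval(coeffs, x)
    training_error = np.mean((fit - y) ** 2)
    print(f"degree {degree}: training error {training_error:.4f}")

# Degree 1 barely sees the pattern; degree 12 starts to memorize the noise it
# was shown. The degree is nowhere in the data: it is chosen by a person,
# like the illustrator's corrected leaf.
```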