
By Virginia Eubanks, from Automating Inequality, which was published this month by St. Martin’s Press. Eubanks is an associate professor of political science at the University at Albany, SUNY, and a founding member of the Our Data Bodies project.

Forty years ago, nearly all the major decisions that shape our lives—whether or not we are offered employment, a mortgage, insurance, credit, or a government service—were made by human beings. They often used actuarial processes that functioned more like computers than people, but human discretion still prevailed.

Today, we have ceded much of that decision-making power to machines. Automated eligibility systems, ranking algorithms, and predictive risk models control which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud. Our world is crisscrossed by information sentinels, some obvious and visible: closed-circuit cameras, GPS on our cell phones, police drones. But much of our information is collected by inscrutable, invisible pieces of code embedded in social media interactions, applications for government services, and every product we buy. They are so deeply woven into the fabric of social life that, most of the time, we don’t even notice that we are being watched and analyzed.

Even when we do notice, we rarely understand how these processes are taking place. There is no sunshine law to compel the government or private companies to release details on the inner workings of their digital decision-making systems. With the notable exception of credit reporting, we have remarkably limited access to the equations, algorithms, and models that shape our life chances.

We all live under this new regime of data analytics, but we don’t all experience it in the same way. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, stigmatized religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much heavier burden of monitoring, tracking, and social sorting than advantaged groups.

The most marginalized in our society face higher levels of data collection when they access public benefits, walk through heavily policed neighborhoods, enter the health care system, or cross national borders. That data reinforces their marginality when it is used to target them for extra scrutiny. Groups seen as undeserving of social support and political inclusion are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a feedback loop of injustice.

Take the case of Maine. In 2014, under Republican governor Paul LePage, the state attacked families who were receiving cash benefits through a federal program called Temporary Assistance for Needy Families. TANF benefits are loaded onto EBT cards, which leave a digital record of when and where cash is withdrawn. LePage’s administration mined data collected by federal and state agencies to compile a list of 3,650 transactions in which TANF recipients withdrew cash from ATMs in smoke shops, liquor stores, and out-of-state locations. The data was then released to the public.

The transactions that were flagged as suspicious represented only 0.3 percent of the 1.1 million cash withdrawals completed during that time period, and the data showed only where cash was withdrawn, not how it was spent. But the administration disclosed the data to suggest that TANF families were defrauding taxpayers by buying liquor, cigarettes, and lottery tickets. Lawmakers and the professional middle-class public eagerly embraced the misleading tale it spun.

The Maine legislature introduced a bill that would require TANF families to retain all cash receipts for twelve months, in order to facilitate state audits of their spending. Democratic legislators urged the state’s attorney general to use LePage’s list to investigate and prosecute fraud. The governor introduced a bill to ban TANF recipients from using their benefit cards at out-of-state ATMs. These proposed laws were patently unconstitutional and unenforceable, and would have been impossible to obey—but that was not the point. Such legislation is part of the performative politics governing poverty. It is not intended to work; it is intended to heap stigma on social programs and reinforce the misleading narrative that those who access public assistance are criminal, lazy, spendthrift addicts.

This has not been limited to Maine. Across the country, poor and working-class people are being targeted by new tools of digital poverty management, and face life-threatening consequences as a result. Vast networks of social services, law enforcement, and neighborhood-surveillance technology make their every move visible and offer up their behavior for scrutiny by the government, corporations, and the public.

Automated eligibility systems in Medicaid, TANF, and the Supplemental Nutrition Assistance Program discourage families from claiming benefits that they are entitled to and deserve. Predictive models in child welfare deem struggling parents to be risky and problematic. Coordinated entry systems, which match the most vulnerable unhoused people to available resources, collect personal information without adequate safeguards in place for privacy or data security.

These systems are being integrated into human and social services at a breathtaking pace, with little or no discussion about their impacts. Technology boosters rationalize the automation of decision-making in public services—they say we will be able to do more with less and get help to those who really need it. But programs that serve the poor are as unpopular as they have ever been. This is not a coincidence: technologies of poverty management are not neutral. They are shaped by our nation’s fear of economic insecurity and hatred of the poor.

The new tools of poverty management hide economic inequality from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices about who gets food and who starves, who has housing and who remains homeless, whose family stays together and whose is broken up by the state. This is part of a long American tradition. We manage the poor so that we do not have to eradicate poverty.

America’s poor and working-class people have long been subject to invasive surveillance, midnight raids, and punitive policies that increase the stigma and hardship of poverty. During the nineteenth century, they were quarantined in county poorhouses. In the twentieth century, they were investigated by caseworkers who treated them like criminals on trial. Today, we have forged a digital poorhouse. It promises to eclipse the reach of everything that came before.

The differences between the brick-and-mortar poorhouse of yesterday and the digital one of today are significant. Containment in a physical institution had the unintended result of creating class solidarity across the lines of race, gender, and national origin. If we sit at a common table to eat the same gruel, we might see similarities in our experiences. But now surveillance and digital social sorting are driving us apart, targeting smaller and smaller microgroups for different kinds of aggression and control. In an invisible poorhouse, we become ever more cut off from the people around us, even if they share our suffering.

In the 1820s, those who supported institutionalizing the indigent argued that there should be a poorhouse in every county in the United States. But it was expensive and time-consuming to build so many prisons for the poor—county poorhouses were difficult to scale (though we still ended up with more than a thousand of them). In the early twentieth century, the eugenicist Harry Laughlin proposed ending poverty by forcibly sterilizing the “lowest one tenth” of the nation’s population, approximately 15 million people. But Laughlin’s science fell out of favor after its use in Nazi Germany.

The digital poorhouse has a much lower barrier to expansion. Automated decision-making systems, matching algorithms, and predictive risk models have the potential to spread quickly. The state of Indiana denied more than a million public assistance applications in less than three years after switching to private call centers and automated document processing. In Los Angeles, a sorting survey to allocate housing for the homeless that started in a single neighborhood expanded to a countywide program in less than four years.

Models that identify children at risk of abuse and neglect are proliferating rapidly from New York City to Los Angeles and from Oklahoma to Oregon. Once they scale up, these digital systems will be remarkably hard to decommission. Oscar Gandy, a communications scholar at the University of Pennsylvania, developed a concept called rational discrimination that is key to understanding how the digital poorhouse automates inequality. Rational discrimination does not require class or racial hatred, or even unconscious bias, to operate. It requires only ignoring bias that already exists. When automated decision-making tools are not built to explicitly dismantle structural inequalities, their increased speed and vast scale intensify them dramatically.

Removing human discretion from public services may seem like a compelling solution to discrimination. After all, a computer treats each case consistently and without prejudice. But this actually has the potential to compound racial injustice. In the Eighties and Nineties, a series of laws establishing mandatory minimum sentences took away discretion from individual judges. Thirty years later, we have made little progress in rectifying racial disparity in the criminal justice system, and the incarcerated population has exploded. Though automated decision-making can streamline the governing process, and tracking program data can help identify patterns of biased decision-making, justice sometimes requires an ability to bend the rules. By transferring discretion from frontline social servants to engineers and data analysts, the digital poorhouse may, in fact, supercharge discrimination.

Think of the digital poorhouse as an invisible web woven of fiber-optic threads. Each strand functions as a microphone, a camera, a fingerprint scanner, a GPS tracker, a trip wire, and a crystal ball. Some of the strands are sticky. Along the threads travel petabytes of data. Our activities vibrate the web, disclosing our location and direction. Each of these filaments can be switched on or off. They reach back into history and forward into the future. They connect us in networks of association to those we know and love. As you go down the socioeconomic scale, the strands are woven more densely and more of them are switched on.

When my family was erroneously red-flagged for a health care fraud investigation in 2015, we had to wrestle only one strand. We weren’t also tangled in threads emerging from the criminal justice system, Medicaid, and child protective services. We weren’t knotted up in the histories of our parents or the patterns of our neighbors. We challenged a single strand of the digital poorhouse and we prevailed.

Eventually, however, those of us in the professional middle class may very well end up in the stickier, denser part of the web. As the working class hollows out and the economic ladder gets more crowded at the top and the bottom, the middle class becomes more likely to fall into poverty. Even without crossing the official poverty line, two thirds of Americans between the ages of twenty and sixty-five will at some point rely on a means-tested program for support.

The programs we encounter will be shaped by the contempt we held for their initial targets: the chronically poor. We will endure invasive and complicated procedures meant to divert us from public resources. Our worthiness, behavior, and social relations will be investigated, our missteps criminalized.

Because the digital poorhouse is networked, whole areas of middle-class life might suddenly be subject to scrutiny. Because the digital poorhouse serves as a continuous record, a behavior that is perfectly legal today but becomes criminal in the future could be targeted for retroactive prosecution. It would stand us all in good stead to remember that an infatuation with high-tech social sorting emerges most aggressively in countries plagued by severe inequality and governed by totalitarians. Here, a national catastrophe or a political regime change might justify deploying the digital poorhouse’s full surveillance capability across the class spectrum.

We have always lived in the world we built for the poor. We created a society that has no use for the disabled or the elderly, and so we are cast aside when we are hurt or grow old. We measure human worth by the ability to earn a wage, then suffer in a world that undervalues care, community, and mutual aid. We base our economy on exploiting the labor of racial and ethnic minorities and watch lasting inequalities snuff out human potential. We see the world as inevitably riven by bloody competition and are left unable to recognize the many ways in which we cooperate and lift one another up.

When a very efficient technology is deployed against a scorned out-group in the absence of strong human rights protections, there is enormous potential for atrocity. Currently, the digital poorhouse concentrates administrative power in the hands of a small elite. Its integrated data systems and digital surveillance infrastructure offer a degree of control unrivaled in history. Automated tools for classifying the poor, left on their own, will produce towering inequalities unless we make an explicit commitment to forge another path. And yet we act as if justice will take care of itself.

If there is to be an alternative, we must build it purposefully, brick by brick and byte by byte.

