Six Questions — August 7, 2014, 8:00 am

Astra Taylor on The People’s Platform: Taking Back Power and Culture in the Digital Age

Astra Taylor discusses the potential and peril of the Internet as a tool for cultural democracy

Astra Taylor © Deborah Degraffenried

In her new book, The People’s Platform, Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource. I asked her six questions about her book:

1. What were the benefits and challenges of presenting philosophical discussions in writing rather than in film?

There is so much more space to communicate in a book. Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story. When you’re writing, you can incorporate digressions, caveats, counterarguments, and footnotes that would cause a movie to lose all of its momentum. Of course cinema communicates on other registers: there are visual and sonic elements, video editing has its own logic, there are facial expressions and body language to interpret, and so on. But with the book I was relieved to work on something that didn’t require a big budget or a crew or insurance or worrying about whether it was going to rain. Still, I miss filmmaking, which is why I’m starting work on another philosophy documentary, this one about democracy and political theory.

2. How did you come to realize that the predictions made by techno-optimists about the Internet were flawed?

In 2009 I attended a conference about Web 2.0 and felt something was amiss. There was this conflation of communal creative spirit and capitalist spunk; the general assumption was that a combination of networked technology and the free market was going to overthrow the old order and usher in this new egalitarian age. The speakers and audience were all deeply critical of legacy media, of Hollywood and newspapers and the Recording Industry Association of America (and rightly so; there are lots of things to criticize them all for), but they refused to apply the same skeptical sensibility to the emerging Internet giants, which they were mindlessly cheering. But why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.

From where I stood it seemed obvious that the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors. And that was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous than the newspaper and television spots of yore.

3. I notice that you continue to use Twitter, even as you recognize the flaws in some of its business practices. How do we reconcile our enjoyment of social media with our understanding that the corporations that control them aren’t always acting in our best interests?

I can’t wholeheartedly say I enjoy Twitter, or at least I’m enjoying it less and less these days, but I don’t think there’s a contradiction. I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse, since you never know when there will be a significant change to the terms of service or to the service itself. Facebook’s invitation to “pay to promote” status updates was a tipping point for me and I jumped ship.

Obviously there are benefits to be reaped from social media, but we should also be conscious of how much we give in return. I’m not the first to point out that these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.

It’s important to keep this bigger picture in mind and focus less on our individual use habits. We hear a lot about whether people are on social media too much or are addicted to their smartphones — the implication being that we need to muster more willpower and regularly “detox” from our devices. It’s also common in discussions of online privacy to hear commenters say users need to practice “good digital hygiene” by adopting encryption or having stronger passwords. Sure, I agree that time away from our phones is healthy and that encryption is important, but I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently. As for digital hygiene, privacy isn’t like toothbrushing; our privacy is being systematically violated by state and private interests in a way that our mouths aren’t. We can and should adjust our behavior where and when it’s appropriate, but structural factors are paramount in my view.

4. The Internet has been heralded by some as the “great equalizer.” Yet you demonstrate that social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?

That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.) But just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem. How can we adapt hard-won protections to this new, networked landscape? It’s a fascinating and complicated issue, and fortunately some smart legal scholars, for example Danielle Citron and Mary Anne Franks, are taking it seriously.

5. You use the expression “starving in the midst of plenty” to describe today’s digital climate, where commercial media overwhelmingly reigns despite the vast quantity of media accessible to us. What steps can we take to encourage better representation of independent and non-commercial media?

We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.

The People’s Platform, by Astra Taylor

6. You advocate for greater government regulation of the Internet. Why is this important?

I wouldn’t say “regulation of the Internet,” which is a common phrase but a misleading one. I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight. The ongoing battles between Amazon and the publisher Hachette, and between YouTube and independent record labels, are good examples of how crucial antitrust protection is. The United States could learn a lot on this front from Europe, where officials are being far more aggressive.

In the book I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real. Related to this, there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content, having opened video-production studios, and offers fiber-optic broadband in select cities, not to mention all of its other endeavors and acquisitions, including robotics, home appliances, satellites, you name it.

So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
