August 2021 Issue [Miscellany]

The Undiscovered Country

Can suicide be predicted?

Mixed-media cyanotypes by Aline Smithson © The artist

I had heard there was a device in Sweden that could read your palm and tell you whether or not you would kill yourself. I wanted to try it; among other reasons, so that I could find out whether or not I would kill myself.

It’s the sort of thing a person really ought to know about himself. And yet nearly eight out of ten suicides deny, in their final conversation with a health care professional, that they are contemplating the act. There are plenty of good reasons for this—shame, fear of involuntary hospitalization—but what if there were an additional reason: What if many actually weren’t contemplating suicide, or were but somehow didn’t know it? The Swedish device, which is called the EDOR and is manufactured by a company called Emotra, was billed as “a new and objective method for assessing suicide risk,” one that “has proven itself amply in clinical practice.” It promised to tell you what you might not be in a position to intuit yourself. I wrote the company’s CEO, Daniel Poté, and awaited the ugly truth.

At the time, I was in an ideal position to consider the implications of this sort of invention. During the height of the pandemic, when offices were closed and everyone had begun staying indoors and leaving each other alone, I was given access to a small meeting room typically used by a psychoanalyst. It was where she saw her patients, back before analysis became something conducted entirely remotely. Leather chairs, tissue boxes, soothing lamps. The bookshelves were filled with works by D. W. Winnicott and R. D. Laing, and titles like Soul Murder and On the Nightmare. Reaching for something to read, I’d inevitably land on a book like Internal World and External Reality or The Restoration of the Self. Our inability to understand our own behavior, our own motivations, was a recurring topic.

If you are having thoughts of suicide, call the National Suicide Prevention Lifeline at 1-800-273-8255.

Also, I would be lying if I said the idea of suicide hadn’t ever occurred to me. Usually late at night—on nights when I couldn’t sleep and my thoughts began spiraling off in all sorts of boring and onerous directions—I couldn’t help thinking there was at least one surefire way to get to sleep, one approach that couldn’t fail. We tend to discuss suicidal ideation as if it is dramatic, rather than banal, but it is surely just as often the latter. The impulse can act as a sort of muscle relaxant, a release valve that also, perhaps not incidentally, corresponds to periods of persistent discomfort or despair, or some kind of admixture of the two. An anxiety that blurs the vision. “Please kill me right now,” Moses said to God as he wandered the desert. “It will be a kindness.” Every now and then, for a period of some years, one is unhappy and feels uninterested in one’s unhappiness, and everyone else seems unhappy, too. Isn’t this all, at the end of the day, a little tiresome? A little grotesque? “If you do not want to fight, you can run away,” Seneca said. “Do you ask what may be the way to freedom? Any vein in your body.” There is a certain amount of relief to be found in sentiments like this.* But what connects the common ideation to the uncommon attempt?

Poté proved difficult to pin down. There was mention of a “rights issue,” of the company “moving office,” of nonspecific “travels.” He kept postponing an interview, and offering increasingly ardent apologies. I told him I wanted to try the device, and he explained that they “do not have a system up and running in the U.S.” I presumptuously asked if he could mail me one, and he explained, “At the moment we have regulatory approval only in the E.U.,” and so to do this would be “problematic.” It is a medical device, he emphasized, “intended to be used in a medical setting.”

While reading Winnicott at the analyst’s office one morning—“At the start is an essential aloneness. At the same time this aloneness can only take place under maximum conditions of dependence”—I finally got a call from Poté. As we spoke, I looked up images of the EDOR on Emotra’s website. It was light blue and vaguely ovaloid, with two yellow strips in the center on which users were to place their index and middle fingers. (It didn’t read your palm after all, but the pads of your fingers.) Poté explained that the basic mechanism is comparable to the process “used in lie-detector tests, like you see in the movies.” It tested for what Emotra called “electrodermal hyporeactivity” by running a weak current over the skin as the sweat glands open and close. The association with suicide had been advertised by an experimental psychiatrist named Lars-Håkan Thorell, Emotra’s founder, a man I envisioned as an archetypal mad scientist.
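
Emotra has never published the scoring rules behind that measurement, but the logic, as Poté described it, is simple enough to caricature. What follows is a toy sketch under invented assumptions: the function name, the amplitude threshold, and the response count are hypothetical placeholders, not the EDOR’s actual criteria.

```python
# Toy sketch of a hyporeactivity test. All thresholds here are invented;
# Emotra's real scoring rules are proprietary and unpublished.

def classify_reactivity(scr_amplitudes, threshold=0.01, min_responses=2):
    """Flag 'electrodermal hyporeactivity' from skin-conductance responses.

    scr_amplitudes: response amplitudes (in microsiemens), one reading per
    repeated stimulus. If fewer than min_responses readings clear the
    threshold, the sweat glands are barely reacting at all, and the
    subject is flagged hyporeactive.
    """
    count = sum(1 for a in scr_amplitudes if a > threshold)
    return "hyporeactive" if count < min_responses else "reactive"

# A subject whose responses die off almost immediately:
print(classify_reactivity([0.04, 0.006, 0.002, 0.0, 0.0]))  # hyporeactive
```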

I should mention that many neutral scientific observers disagree with Thorell’s interpretation of the evidence, and believe his claims to be radically overstated. His confidence, and will to commercialize, have about them the whiff of charlatanism. But Poté seemed optimistic. I asked about his ambitions for the device. “It would be great if it could be used as a standard, easy, cost-effective test,” he said. He imagined it being used all over the world. Patients could place their fingers on the box and before long know what they may not have been able to admit to themselves.

He also mentioned, more obscurely, that Emotra wasn’t sure exactly what it was the EDOR was determining about a person. That is, they understood the physical process being measured, but the connection between that process and a suicide attempt remained a mystery. “Long story short, when it comes to knowing what it is, we do not know,” he said. He added that it wasn’t depression, as not all depressives test positive on the device. It was something else. Some untold, autodestructive precursor or bug that dwells in the shadows of our physiology. He said, “We have found a disease or a state or a problem that doesn’t have a name.” Assuming they have found anything at all.

Monday is the most popular day of the week to kill yourself. On average, someone on earth dies by suicide about once every forty seconds; in America, it’s closer to once every eleven minutes. There are more than twice as many suicides here as homicides—it is the second leading cause of death for those aged fifteen to twenty-four, and the fourth for those aged eighteen to sixty-five. Women attempt suicide more often than men by a factor of three, but men succeed more often by a factor of four. Men die from suicide more often than women everywhere, except where they don’t—in China, for instance, where the opposite is true.

Over the past three decades, suicide rates have dropped by nearly half in Greenland and have almost doubled in South Korea. Young people with tattoos have been found more likely to die by suicide, as have users of heroin or Ambien, prison inmates, and farmers. Studies have shown that one’s propensity for suicidal thinking can be increased by sexual obsession or by firefighting. Patients who have undergone multiple surgeries are at a greater risk for suicide, but then so are the surgeons themselves, particularly if they are female.

Ritual suicide of various sorts has been recorded among the Kaliai people of Papua New Guinea and the Yuit of St. Lawrence Island. Suicidal behavior has been observed among worker bumblebees and Australian redback spiders and, in several cases, among dogs. Certain researchers have found that vivid nightmares might be more reliable predictors of suicide than depression or hopelessness, though other researchers disagree.

“Surely no other mysterious phenomenon of human activity has excited so little scientific investigation,” wrote the psychoanalyst Karl Menninger in the Thirties. Things have changed since Menninger’s day. Suicide is now a frantic nexus of scientific investigation, generating a library of research and statistics that, taken together, seems to perplex as much as it clarifies. A new discipline, suicidology, emerged in the Sixties to take up the question, developing its own jargon and schisms and methodologies. The terms of the discussion have evolved. A reader of a late-nineteenth-century edition of Chambers’s Encyclopaedia would find committing suicide described as a “heinous crime,” for which the punishments included “an ignominious burial in the highway, with a stake driven through the body.” Today the National Institute of Mental Health cautions against saying that a person “committed” suicide at all; better to say that they “completed” it, to avoid the implication of an illicit or criminal act.

The dream of scientific precision in psychiatry has culminated in a vision of suicide as less a private existential dilemma than a condition that can be anticipated with superhuman accuracy. “If one were to say that the prediction of suicide is no goal but a phantasm, a wraith,” wrote the psychologist James Diggory in the early Seventies, “we would have no present information with which to refute him.” Today’s researchers feel that we are in a very different position, and are tackling the problem with the heady zeal of the twenty-first-century technocrat. They are finding patterns in the data. They know us better than we know ourselves. They have conquered the phantasm, the wraith. We may never agree on what suicide means exactly, but this is no longer the pertinent question. That question is: Can we see it coming?

The scholarly study of suicide had a long if not particularly distinguished prehistory. Most famously, there was Émile Durkheim’s 1897 Suicide, which followed from the notion that the phenomenon is “dominantly social,” and thus “contemporaneous with some passing crisis affecting the social state.” The book prefigures some of the modern studies in its tortuous classifications and its reverence for data, but it’s far more useful—and in far wider use today—as a demonstration of sociological method than as a document with true and interesting things to tell us about suicide. Thomas Joiner, the editor of the academic journal Suicide and Life-Threatening Behavior, has argued that the book’s influence persisted largely because “Durkheim had little competition for decades.” Joiner attributed this to the long preeminence of psychoanalysis, from which, he wrote, “to be blunt, it is difficult to think of a lasting contribution.”

It’s true that Freud himself didn’t have much to say on the subject. He believed suicide could be understood as the result of certain unfortunate instances of “introjection,” in which the ego incorporated aspects of a loved and lost object into the self, and so redirected inward the subsequent destructive feelings toward that object. (Plausible enough.) But again, he didn’t seem particularly interested. When patients told Freud that their lives were hopeless or without purpose, he considered this an impressive display of self-awareness. “We can only wonder,” he wrote, “why a man must become ill before he can discover truth of this kind.”

Other psychoanalysts expanded on Freud’s approach, notably Menninger, who, examining various methods of suicide, read them as sublimated instances of, for example, “fellatio acted out violently” or “passive erotic submission” or “drowning phantasies.” Wilhelm Reich, in his 1927 monograph The Function of the Orgasm, wrote, “Patients committed suicide when their sexual energy had been stirred up but was prevented from attaining adequate discharge.” But then Reich saw insufficient orgasms as the root of every problem; this isn’t to say he was wrong, just predictable.

If there was a father of suicide prediction, it was Edwin Shneidman. He coined the term “suicidology,” in addition to founding the American Association of Suicidology in 1968; establishing and for many years editing Suicide and Life-Threatening Behavior; and, over the course of his career, publishing many morbid and influential texts with titles like Deaths of Man, Voices of Death, Definition of Suicide, The Suicidal Mind, Comprehending Suicide, and so on, before his death (of natural causes) in 2009. “I don’t know whether suicide was looking for me or I was looking for suicide,” he once told an interviewer.

Shneidman was the idiosyncratic philosopher-poet of the field, constantly generating neologisms (“psychache,” for instance, and “psychological autopsy”) and elaborating new metaphors for the act (such as the arboreal image, in which our biochemical states are “roots”). He exempted himself neither from the issue’s complexity nor from its grim allure. “I am against suicide committed by other people,” he wrote, “but I want to reserve that option for myself.” He described his dinner table as being crowded with photocopies of articles about death. The thought of it kept him awake at night.

It wasn’t only Shneidman who recognized a vacuum when he did. In the Fifties and Sixties, there emerged a small but dedicated band of rogue scholars who devoted themselves to the study of a subject that was then—and still largely remains—taboo. There was Eli Robins, who began knocking on doors and interviewing the surviving family members of suicides. And there was Aaron T. Beck, best known today as the inventor of cognitive behavioral therapy, who also developed early tools for clinical treatment, including the Beck Hopelessness Scale and the Beck Scale for Suicidal Ideation. What these men had in common was their patricidal repudiation of psychoanalysis. They had been steeped in the tradition, having undergone analysis themselves, and had found the project wanting. Beck claimed that he had set out to validate Freud’s theories in good faith, and was as surprised as anyone to discover that they were worthless. “As I pursued my investigations,” he wrote, “the various psychoanalytic concepts began to collapse like a stack of dominos.” Robins had undergone a similar transformation. He was rumored to keep a photo of Freud over the urinal in his department’s bathroom.

The skepticism that the early suicidologists felt toward psychoanalysis was symptomatic of a larger crisis in psychiatry. On the one hand, there were the critiques made by the so-called anti-psychiatry movement, which encompassed everything from the theories of Laing and Foucault to Thomas Szasz’s 1961 study The Myth of Mental Illness and Ken Kesey’s 1962 novel One Flew Over the Cuckoo’s Nest. Psychiatry was a coercive institution of the authoritarian state, the argument went, mental illness a social construct invented to exclude the transgressive other. (Here again: plausible enough.) More worrisome, however, insurance companies and even Congress had begun expressing their own doubts as to the field’s legitimacy. In the absence of clearly articulated diagnostic criteria or consistent clinical accountability, they wondered how they could justify continued financial support for psychiatric treatment.

The most tangible and significant result of this pressure was the third edition of the Diagnostic and Statistical Manual of Mental Disorders. The book was a dramatic expansion of its predecessors in terms of both length and aspiration. It introduced the precise definitions of mental disorders and standardized diagnostic criteria that the profession had lacked. In doing so, and not without controversy, it reimagined the discipline along aggressively positivist lines, aiming to replace interpretation and intuition with absolutist empirical rigor rooted in behaviorism and quantitative social research. If psychiatry were to become a true science, it would require a new epistemology.

“Imagine the thing we care about is five feet above the floor, and you’re shining a light on it, and it’s casting a shadow—we see the shadow.” I was talking with Colin Walsh, a suicide researcher at Vanderbilt University who identifies as an “informatician.” Having failed to convince the Swedes to send me their suicide machine, I had decided to speak to scholars whose work was actually respected in the field. “In measuring that shadow and its diameter, you get a sense of the risk,” Walsh went on. “But it’s not exactly what you want to get to, which is that core thing that’s above the floor.”

That core thing, that disease without a name, is the enigma at the heart of suicide prediction. But Walsh, an earnest and affable person with a background in data analytics, believed his models had the potential to come closer than most to discerning its shape. I had called to ask him about a headline-making study he’d recently published. In it, he explained that “traditional approaches to the prediction of suicide attempts” had been unsuccessful, and that he and his team had sought to overcome their limitations by applying machine learning to electronic health records, a novel technique in clinical psychology. He had claimed a measure of success, writing that they had developed “algorithms that accurately predicted future suicide attempts.” Walsh claimed that they could predict whether a person would attempt suicide within a week with a level of exactitude I found disconcerting.
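
Walsh’s models don’t reduce to a few lines, but the shape of the approach, a supervised classifier trained on features extracted from health records and then scored on held-out patients, can be sketched. Everything below is a stand-in: the features and labels are synthetic, and the random forest is one common choice for this kind of tabular data, not necessarily what his team used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an EHR-derived feature matrix: one row per
# patient, columns for things like age, prior diagnoses, medication
# counts, past emergency visits. Labels mark a documented suicide
# attempt within some follow-up window. None of this is real data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = rng.binomial(1, 0.02, size=5000)  # a rare outcome, as in real cohorts

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Performance is typically reported as AUC, where 0.5 is the coin-toss
# baseline. On pure-noise features like these, the score hovers there;
# Walsh's reported numbers on real records were far higher.
scores = model.predict_proba(X_test)[:, 1]
print(roc_auc_score(y_test, scores))
```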

Walsh’s acknowledgment of prior failure was nothing new for the field. In the 1974 anthology The Prediction of Suicide, the editors refer to a meeting hosted by the National Institute of Mental Health a few years earlier in Phoenix, at which the consensus view was that the discipline possessed a “very flimsy basis of knowledge.” The group agreed that “during previous decades the cart had been put before the horse.” They argued that too much energy had gone toward suicide prevention and that “too little effort had been made in establishing a firm empirical foundation for defining and ascertaining the causes.” Thus, their work should be oriented toward determining the real risk factors, toward finding some way to predict the unpredictable.

Half a century later, the field once again took stock of what it had learned, and the results were not encouraging. A team led by one of Walsh’s co-authors, Joe Franklin, performed a meta-analysis in 2016 of the previous five decades of research into suicidal risk factors. Franklin explained his findings in a talk at Yale that year: “We had a pretty good idea of who was going to be at risk and who wasn’t,” he told the audience. “We also assumed our knowledge . . . must have been steadily improving over time.” Standing at a lectern before an enormous screen, he showed a PowerPoint slide listing a handful of conditions that had long been considered powerful risk factors: past suicidal behavior, hopelessness, mental disorders, social isolation. Having completed their analysis, however, Franklin and his team discovered that they had been incorrect—that they were essentially nowhere on the subject. They didn’t know what mattered. “These results may be surprising and disappointing to many,” his study concluded, but our predictive capabilities, after fifty years, were “weak and inaccurate” and “only slightly better than chance.” Or, as Walsh put it to me on the phone, we were at that point unable to predict suicide with any more accuracy than a coin toss.

Franklin had recommended the field shift its focus from “risk factors” to “risk algorithms.” Statistical correlation itself was clearly not a particularly revealing phenomenon. In his lecture, he noted that for several years, deaths by suicide in the United States “have been powerfully correlated with things like the cost of bananas, average money per household spent on pets, and, my personal favorite, per capita consumption of chicken.” This is where the work of data scientists came in. Walsh didn’t look at any particular factors in isolation (electrodermal hyporeactivity, for instance); he looked at combinations. Better yet, he trained algorithms to look for them. “From a scientific perspective, it’s hard to think of a more complex problem than this one,” he told me. Given this complexity, there was no longer any reason to suspect that humans might be up to the task. Walsh had accepted this: “As the algorithms get more sophisticated,” he said with some hesitation, “our ability to interpret them and understand how they arrive at a conclusion sometimes gets more difficult.”
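
Franklin’s banana example is easy to reproduce in miniature: any two quantities that drift in the same direction over the same years will correlate, which is exactly why a single risk factor examined in isolation proves so little. The numbers below are invented for illustration.

```python
import numpy as np

# Two invented time series that merely trend upward together.
rng = np.random.default_rng(1)
years = np.arange(1999, 2019)
suicide_rate = 10.0 + 0.15 * (years - 1999) + rng.normal(0, 0.2, len(years))
banana_price = 0.45 + 0.01 * (years - 1999) + rng.normal(0, 0.02, len(years))

# A strong correlation falls out of the shared trend alone.
r = np.corrcoef(suicide_rate, banana_price)[0, 1]
print(f"correlation: {r:.2f}")  # typically above 0.9, and meaningless
```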

In other words, we may finally know something, but we do not necessarily know how we know it.

One afternoon in 1949, Shneidman was sent on an errand to the Los Angeles County coroner’s office, where he found, in the building’s vault, a trove of hundreds of suicide notes. In later years, some would point to this discovery as the moment at which suicidology was born. Shneidman spent years parsing the notes, convinced there was something concrete he might learn from them. Over the decades that followed, he became less and less sure. Or anyway, he became conflicted. He quoted Isaac Bashevis Singer: “I have read scores of letters from suicides, but none of them ever told the truth.”

Nevertheless, Shneidman had unwittingly inspired yet another current in the modern data-driven effort to predict suicide, as Tony Wood, the chair of the American Association of Suicidology, explained to me. Wood is also the co-founder of a tech company called Qntfy, which analyzes what he calls “digital life data”—social-media use, email, browsing history, streaming and media consumption—that users have consented to have tracked. He sees Qntfy as an extension of the work done by the first wave of suicidologists in the Fifties and Sixties, the continuation of a lineage. “Their data collection practices were very interesting to me,” he said. “They were quite persistent and quite forward-thinking about going out and collecting data.”

Rather than reading suicide notes or looking at health records, Qntfy considers every digitally mediated aspect of a person’s life. Its sample comes far closer, Wood believes, to offering a complete record of a person’s activities and preferences, their private thoughts and public presentation. I asked about Walsh’s research, and Wood pointed out that in looking only at health records, Walsh and his peers were working from a sample that was necessarily minuscule by comparison. “How many times have you been to a doctor and how many times have you sent a text message?” he asked. Of his own data set, he said, “This is kind of like getting people’s emotions and their thoughts and feelings right from the tap, right exactly from the person.” Not the shadow to which Walsh had alluded, but the object itself. “Just like your own social-media posts, I’d imagine—you’d get a sense over time of who Will is.”

I considered this. What could the program learn about me from my “digital life data”? It could read my conversations with family members and friends; with dead family members and estranged friends. It could see what I watched and listened to, the articles I read. That from the internet in the past year I have purchased socks and roach poison and headphones and a copy of Jonathan Spence’s The Memory Palace of Matteo Ricci. I wondered if it mattered whether or not we tell each other the truth—for the algorithm’s purposes, I mean, does it matter if we tell the truth? I have to assume that it doesn’t. Adam Phillips writes that psychoanalysis “works by attending to the patient’s side effects, what falls out of his pockets when he starts speaking.” Maybe the machine learning works the same way. The juxtapositions themselves—of purchases, images, text messages—might be the important thing, the sedimentary layers of ephemera the point. Otherwise, what could you learn about a person from his digital life except that he is, like all of us, ambivalent?

I had heard there was a device in Pittsburgh that could scan your brain and tell you whether or not you would kill yourself. According to the Philadelphia Inquirer, the University of Pittsburgh’s Endowed Chair in Suicide Studies, David Brent, had partnered with the Carnegie Mellon cognitive neuroscientist Marcel Just on a study that used fMRI brain scans to “predict who will attempt suicide,” an approach that had so far netted them a grant of almost $4 million from the National Institute of Mental Health. “Just as you’d videotape a golf swing and see what’s wrong with it, you would look at the brain scan and see what’s wrong with the thought,” Just told the newspaper. They were embarking on a yearslong study to advance the research. “It could give us a window into the suicidal mind that we don’t have now,” Brent said. I took the bait: I emailed Brent’s assistant, who penciled me in for a videoconference interview.

A few weeks later, an intelligent-looking, unshaven man in his late sixties wearing a bright-red T-shirt sat before me on my computer screen. An assortment of crystal prisms hung by the window of the small home office where he had been working while sheltering in place. He described his early forays into the field in the Eighties, interviewing family members of the deceased. “I found the problem wasn’t getting in the door—it was leaving,” he said. “Because nobody else would talk to them. I would say, Look, it’s not your fault. Or I’d say, Honestly, I don’t know how you could have predicted this.”

Over the years, faced with the fundamental uncertainty of the suicide researcher, he found himself drawn to a subset of the discipline that foregrounded the search for biomarkers—biological attributes that might indicate a patient’s suicidality. For a while he suspected the answer might be genetic. “We’re looking for the trait that’s really behind the trait,” he told the Boston Globe in 2008, of his research into suicide’s possible heritability. Among many other approaches, he has been involved in studies that measure levels of the stress hormone cortisol and the physical composition of the brain, analyzing cortical thickness or gray and white matter volume. The desire for a biologically objective indicator is understandable, given the unreliability of suicidal patients. In the 2017 paper he published with Just on their brain-imaging project, Brent cited the notoriously low rates at which patients report their plans to die, and argued that they demonstrated a “compelling need to develop markers of suicide risk that do not rely on self-report.”

I read the quote back to him, and he demurred. “I said that,” he conceded, and paused for a while. “I wrote that, I think. I’m not sure it’s the strongest argument for what we’re doing now.” Having considered the realities of the clinical setting, “this idea that somebody is not going to admit that they’re suicidal, but they’re going to go into a scanner and cooperate with you . . . ” He paused again. “It’s absurd.” It wasn’t that he had downgraded his hopes for the brain-scan research, only the circumstances of its utility. “I would say I’ve shifted my expectations,” he said.

In the Nineties, Shneidman published a kind of manifesto warning that the discipline had “jumped too quickly into positivistic-behavioristic empiricistic sciences,” which, he believed, “are not capable of dealing with the phenomenology of human suicide acts.” Among the ideas he singled out for criticism was precisely the search for biological explanations, the way in which he’d begun to see suicide discussed “in terms of synapses, MAO inhibitors, bipolar depressions, and neurotransmitters, or any other reductionistic physical language.” I read the quote to Brent, and asked what he made of it. “First of all,” he said, “we’re trying to, at the biological level, look at how suicidal people think, so that’s not reductionistic at all.” He added, “It’s maybe somewhat mechanistic.”

The question triggered a memory for Brent that he found either irritating or funny or both. “He actually called me once,” he said, meaning Shneidman. “After I got off the phone with him, I kind of felt like I had failed the exam.” He was being given an award in Shneidman’s name, though he doesn’t remember the details, just the strong feeling of having let the man down. “I think he wanted to see whether I was a reductionistic jerk like everybody else,” he said, and then he laughed. The criticism was a dated one, but there was also some essential truth to it that seemed unresolved. At the time of their conversation, Brent said, he had conceived of suicide attempts as the clear end result of bad psychopathology. Shneidman thought it was far more complex than that, that there were other dimensions being overlooked. “And actually,” Brent said, “I think he was right.”

I read him another Shneidman quote, something that I had often been reminded of in my reading and conversations with researchers. It had taken on the quality of a mantra: “A discipline can be no more rigorous than its essential subject matter will permit.” I asked what it brought to mind, whether he felt it had any relevance to his work. “One of the problems with suicide,” he said after a time, “is that the person who killed themselves takes a lot of the answers with them.”

Robert Lowell once said that if humans had access to a button that would kill us instantly and painlessly, we would all press it sooner or later. If there were a switch to flip—“some little switch in the arm”—we would inevitably flip it. At a moment of weakness or a moment of strength, depending on your understanding of the act, we would all make the decision to die, if it were convenient enough.

Or anyway, Lowell might have said this. It’s hard to be sure. Ubiquitous though the citation has been in the curriculum of suicide studies—from Al Alvarez’s 1971 The Savage God through Christopher Belshaw’s 2008 Annihilation: The Sense and Significance of Death—I haven’t been able to locate the idea in Lowell’s poems or prose or interviews. Maybe he said it one day in conversation. It does, I guess, seem like the sort of thing he would say.

When he was sixteen, a good friend of mine, J., shot himself in the head one morning before school with a .22-caliber pistol—just flipped a switch and turned himself off. On the news, they said he was sometimes “troubled.” We had grown up together, spent the night together often, and had gone to school and church together, traveling every summer to dreary Christian camps in the woods. We’d sit in the back of the bus and trade CD binders. Sleep in rooms full of rusty bunk beds.

There was an assembly, a fluorescent-lit basketball court filled with teenagers crying. We were unsettled. Perhaps most unsettled of all was B., who had been one of our closest friends for years. He lived two blocks away, so as a kid I’d ride my bike over and we’d jump on the trampoline while his little Jack Russell terrier barked at us. We’d pedal to the gas station, or laugh at the internet in his living room. We played games like Myst and watched movies like The Rock, in which Nicolas Cage breaks into Alcatraz. After high school, B. joined the Army and hanged himself in a closet.

B. and J. would each be thirty-one now, and it is easy to wonder what they would be up to these days, but both of them are dead. It has been a long time, in fact, since either of them was alive. Maybe we should have scanned their brains. I suppose this is a way of framing the issue for the scholars of suicide: If we had a switch to flip, would we switch ourselves off? Would I? Is this really something you’d want to know about yourself with certainty, were it somehow possible to know it?

I heard an interesting story recently from a friend, a woman who wrote a number of children’s books in the Seventies in addition to co-writing an authoritative text on mind-altering drugs. It concerned the period after her father’s suicide, many years ago, when she fled to an ashram in India to meet the spiritual guru known as Bhagwan Shree Rajneesh, or “Osho” to his followers. She thought to herself: Here is a man with something to teach me. Toward the end of her stay, he gave each of the ashram’s guests a box, which he told them not to open. Except for her. It was important that she open hers, he said. Inside was his treatment for the grief and confusion from which she had been suffering—inside she would find what she needed to move on with her life. I won’t draw this out: There was nothing inside. The box was empty. As she told me this story, we were smoking cigarettes on the street in front of the analyst’s empty office. It was cold and wet. I asked whether she felt that Osho had answered the question she had gone to India to ask. She smiled and said yes, she believed that he had.
