Listening for the future of music

Illustrations by Toma Vagner

Jean-François Laporte stood on the stage of a Georgia Tech auditorium, manning the helm of what could have been a device of extraterrestrial origin. What looked to be a keyboard stand had been outfitted with an audience-facing drumhead at either end, between which lay a hopelessly tangled nest of tubes, knobs, and cables. In the center of the contraption, a PVC pipe hung down and curled under itself like a great proboscis. Jean-François—a bald, fifty-five-year-old Québécois in a slim-fitting black T-shirt and Coke-bottle glasses—was perched imposingly above it all, looking not unlike a futuristic aviator piloting a giant, cybernetic wasp.

Gravely serious, Jean-François began flicking at the taut latex drumheads, pulling at valves, and tweaking dials. The machine responded with a series of chirps and squeaks, then a sequence of electronic pops, some anguished ululations, the squeal of a deflating balloon, and finally, a low-register woodwind hum. After a few noisy minutes, everything fell silent; the performance was over. The handful of people scattered throughout the auditorium offered applause. A few nodded and scribbled notes.

The Babel Table, as he’d named it, was one of ten finalists in the 2024 Guthman Musical Instrument Competition, an annual event held to identify and promote the “newest and greatest ideas in music.” The Babel Table’s competitors, scattered around the stage, consisted of a number of other newly designed instruments, only a few of which—to the amateur’s eye, at least—were recognizable as instruments in the first place, their appearances ranging from a small, unassuming wooden box to what could have been mistaken for a military-grade rocket launcher.

Judging the Guthman were three former contenders: Bosko Kante, a Grammy Award–winning musician featured on earworms by Dua Lipa, Kanye West, and Big Boi; Kelly Snook, an ex-NASA scientist turned something called “data sonification researcher”; and Andrew McPherson, the head of the venerable-sounding Augmented Instruments Laboratory at Imperial College London. All three were Guthman royalty, having been awarded prizes in previous competitions—Kante for his ElectroSpit Talkbox (about which no less a luminary than T-Pain himself concluded, “This is fucking cool!”), Snook for her MiMU Gloves (employed most notably by the two-time Grammy winner Imogen Heap), and McPherson for his magnetic resonator piano (seemingly just a cool contrivance with neither commercial potential nor pop-star endorsement). This trio would hear live demos, fiddle with the instruments, and interrogate their inventors before declaring a winner. For the grand finale, contestants would be paired with local musicians and given a few hours to plan—or plan to improvise—a brief performance for whatever audience existed in Atlanta for this sort of thing. Georgia Tech branded it the “jam session of the future.”

Jean-François has a distinguished résumé for an avant-garde composer, earning accolades across the Great White North. He grew up in Montreal aspiring to be a housebuilder, but then lucked into an exchange program and traveled to Kinshasa, in what was then Zaire, a land he thought of as the “capital of music in Africa.” He returned to Canada several months later with a reinvigorated sense of creative possibility and the idea that his skill as a builder, with “material passing through my hands,” might help him construct a music career. In 1997, he released “Mantra,” his most renowned—and most Canadian—composition, a twenty-odd-minute recording of a compressor cooling a hockey rink. I dug it up on YouTube, and found it reminiscent of listening to breaking waves on an old CB radio. Dusted magazine called it a “deeply moving piece of mechanical mysticism, like a lawnmower on acid.” It’s “kind of a classic,” he told me.

The Babel Table was a defiantly acoustic instrument, generating its sound through the manipulation of compressed air. Given Jean-François’s training as a civil engineer and builder, the instrument’s mechanics made a certain amount of sense; I imagined it as a kind of sonically resonant HVAC system. It was originally made for a stage show devised to teach children about marine life, and Jean-François had bestowed lighthearted nicknames on the Babel Table’s various sounds and controls. The chirps were the product of his “Insects,” tiny metal pipes covered with latex membranes; the agitated ululations stemmed from a different vibrating membrane that Jean-François called the “Diva.” When the entire machine was brought into service, it sounded sometimes like the din of heavy machinery, sometimes like a chorus of jungle fauna.

It’s no insult to Jean-François’s ingenuity to admit that it was difficult to imagine the Babel Table as the harbinger of a new aural reality, forging a bold, uncharted path into the musical hinterlands—or really, to imagine it doing much beyond soundtracking a lesson for schoolchildren about jellyfish or phytoplankton. But that’s an almost impossible standard, to be fair: How many genuinely new instruments, over the past century, over the past two centuries, have attained an enduring place in musical culture anyway, let alone popular musical culture, which, to my mind at any rate, has struggled to evolve much at all over the past few decades? And at a moment when more or less any sound, any effect, is already at our digital fingertips, doesn’t the likelihood that any particular new instrument might hold the key to our musical future grow vanishingly small? In such a world, who needs new instruments? To dub the Guthman the “jam session of the future” was, in this sense, almost laughably quixotic.

And yet, the promise that I might descry that future—that I might, decades from now, be able to smugly and persistently remind my children that I was there the night the now-ubiquitous Babel Table was first introduced to the public’s collective ear—was difficult to resist. If this was indeed the “jam session of the future,” I wanted to be there for the jamming.

In 1841, Adolphe Sax brought a newly invented instrument to the Brussels Industrial Exhibition. By then, the prolific Belgian tinkerer had already made a lasting contribution to the history of musical tech with his fundamental revision of the bass clarinet. (“Compared with this instrument,” remarked the conductor François-Antoine Habeneck, “the old clarinet is a monstrosity.”) Sax was also working on a whole suite of saxhorns, the nineteenth-century ancestors of modern-day flügelhorns and euphoniums. But despite it all, Sax had grown bitter that he had as yet been unable to place first at any of the previous exhibitions he had attended; he was hoping that this would finally be his breakthrough. Unfortunately for Sax, when he carelessly left his instrument unattended, a still-unidentified rival is suspected of kicking it angrily across the room, mangling it so badly that Sax was forced to withdraw it from competition. This, according to not completely unfounded historical conjecture, was the first saxophone.

Along with the sousaphone and the steel drum, the saxophone is one of only a few acoustic instruments invented over the past two centuries to have enjoyed any sort of broad popularity, at least in Western music. (The sax, importantly, still lacks an official role in the standard symphony orchestra.) Subsequent innovations, of course, have largely been of the electronic variety. Chief among them have been the electric guitar and the synthesizer, innovations that, whatever their monumental impact, are nevertheless what are known as “augmented” instruments—technical modifications of recognizable, traditional predecessors.1 Other recent arrivals, like the sampler or the drum machine, have been more radically disruptive, dispensing with inherited notions of performance entirely and transforming the act of playing into something more akin to coding. McPherson, the professor from Imperial College London, suggested that some of the most significant instruments of our era might be things we wouldn’t typically conceive of as such: the digital audio workstation—think Ableton, Pro Tools—or even the modern recording studio.

All of which is to say that, with the nature of instruments having grown so abstract, and our available sonic palette having expanded so broadly as to be functionally infinite, the wholesale invention of an instrument has become an increasingly tricky business, if not a totally anachronistic-seeming one. And yet the Guthman has been going strong since 2009, when what was formerly a fusty piano competition endowed by the Georgia Tech alumnus Richard Guthman transformed into a showcase for instrument inventors, in a bid to tie the school’s music program to its international high-tech reputation. New instruments, the organizers decided, would be judged according to three criteria, by which entries are still evaluated. First, musicality: How creatively do the gestures involved in playing the instrument relate to the sound? How rich is the sound itself? Second, design: How elegant does the instrument look? And lastly, engineering: How well—and how cleverly—is the instrument constructed?

A chief motivating idea behind the competition was that newly conceived instruments need a live audience if they are to have any hope of breaking through to popular consciousness. Jason Freeman, the former chair of Georgia Tech’s music school and one of the original forces behind the Guthman’s overhaul, recalled the example of Robert Moog, whose pioneering synthesizer had been used in studio settings for years before a version of his invention first appeared onstage. It wasn’t until audiences saw it being played with their own eyes that his synth took off commercially. Freeman sees many new instruments as being trapped in Moog-like stasis; the Guthman offers them a chance to break free.

In the years since its revamp, the Guthman has attracted an impressive roster of judges, from the Seventies downtown doyenne Laurie Anderson to the cerebral turntablist DJ Spooky and the magnificently coiffed jazz-fusion guitar god Pat Metheny.2 The names of past winners and finalists, on the other hand, are a bit less familiar, at least to anyone lacking a comprehensive knowledge of the more obscure corners of contemporary musical arcana.

A short but exemplary list:

Keith Baxter, an engineer and patent lawyer who took first place in 2023 for his Zen Flute, a MIDI controller whose pitch is manipulated by the shape of the performer’s mouth and which Baxter likens to a “mouth theremin.”

Úlfur Hansson, the Icelandic inventor of the 2021 winner, the Segulharpa, a wooden, disk-shaped electromagnetic harp that resembles a Viking shield and is played by means of touch sensors embedded in the wood itself.

Subhraag Singh, a musician who in 2017 won with the Infinitone, a saxophonelike woodwind capable of playing microtones by use of a set of trombonelike slides. He has since debuted a set of helpful synthesizer plug-ins to mimic the instrument; its website claims that it offers a “wormhole to music light-years ahead.”

Dániel Váczi and Tóbiás Terebessy, a pair of Hungarian inventors whose Glissotar—a woodwind operated by magnetized ribbon rather than keys or holes, thus allowing for uninterrupted glissando—took home first prize in 2022 and now retails for about $3,000 via Saxophones Ltd.

Leon Gruenbaum, an experimental New York–based composer who was awarded third place in 2011 for the Samchillian Tip Tip Tip Cheeepeeeee, a modified ergonomic keyboard whose keys correspond to musical intervals rather than tones. In the years since, Gruenbaum has used the Samchillian in a variety of musical projects, one of which was described by the late great critic Greg Tate as “stochastically Krunk.”

To be clear, I love this stuff. But even the most dedicated, socially maladjusted experimental music fanatic would be forgiven for lacking familiarity with these instruments. Neither does it seem a stretch to say that none of them have meaningfully altered the trajectory of modern music. And yet the Guthman has produced some undeniable mainstream successes, of a sort. The OP-1 synthesizer, a 2014 Guthman finalist, has been used by Thom Yorke, Diplo, and Animal Collective, while Ryan Gosling can be seen wailing mercilessly on the Seaboard Grand keyboard, a descendant of another finalist, in the film La La Land.3 Even the Samchillian Tip Tip Tip Cheeepeeeee has had a brush with modest fame, having been featured on three solo albums by the Living Colour guitarist Vernon Reid.

But many of these past victorious instruments, while often exceedingly clever or even awe-inspiring from a technical standpoint, were nevertheless based firmly in established traditions of instrument engineering. The basic keyboard many of them riffed on originated with the hydraulis, the Greek water organ of the third century bc, and is thought to have assumed its modern seven-white-keys-five-black-keys form in 1361. The six-string guitar—another popular template for Guthman entrants—dates to 1779. Much of the “augmented” variety, less of the ex nihilo new.

So while winning the Guthman might garner a bit of press, even the kind that could result in a big-screen, Gosling-adjacent appearance (no small deal, to be sure), the variations on keyboards or guitars that tend to dominate the Guthman don’t seem likely to herald a wholesale musical sea change any time soon. Aspiring young musicians aren’t exactly lining up to take Infinitone lessons, as far as I know, and I didn’t happen to notice any Segulharpae the last time I wandered into a Guitar Center. But this didn’t necessarily mean that I wouldn’t behold some kind of complete upending of the musical world as we know it. After all, who could have foreseen the coming of the saxhorns?

On Friday, the opening day of the competition, the inventors had set up shop onstage at the Ferst Center for the Arts, in preparation for their demos and subsequent critical interrogations. It turned out that Jean-François, who had been assigned a slot in the far right corner, was something of an outlier as the only acoustic entrant. Among his electronic competitors was Thomas, a foppish, rail-thin twentysomething with a penchant for provocatively unbuttoned silk shirts, who spoke nonchalantly about Gaussian functions, Faraday’s law of induction, and something called the hysteresis curve. His entry, the Lorentz Violin, which resembled a compound hunting bow, was effectively a reimagined Hammond organ—producing sound with a series of rotating gear-shaped “tonewheels.” It was a brilliant device, but one that had difficulty swiftly or cleanly navigating between discrete notes. “It does its best to get there on time,” Thomas said. “But sometimes it misses.”

Standing nearby was Max Addae, a flattopped Oberlin and MIT grad from New York who was exhibiting his VocalCords, three parallel rubber strings stretched across a wooden frame. He likened it to a game of cat’s cradle. Each string governed a particular sonic attribute as one sang: there was a “harmonic” string, a “rhythmic” string, and a “timbral” string. A singer could modulate his voice by stretching or pulling on individual or multiple strings during a performance. “The singing voice,” Max observed, “is one of few musical instruments that typically does not involve touch-mediated interaction.”

At the center of the stage was Kat Mustatea, a self-described “transmedia playwright” who was exhibiting the BodyMouth, a series of sensors strapped to the extremities of two dancers that would produce distinct phonemes according to the dancers’ positions, in a kind of acoustically rendered game of Twister. Through precisely controlled movements, the dancers would, Kat explained, be able to “speak” to the audience, though the tech didn’t seem to have achieved full fluency. Kat’s demo consisted of her speaking words into a microphone, with the dancers positioning their bodies in such a way as to attempt to replicate those words. Alone, Kat intoned. The dancers contorted themselves accordingly: ohh eee enn owooooooo.

The entrant with the glitziest PR was, without a doubt, Anthony Dickens, a charming industrial designer from Somerset, England. (The first time I tried to connect with him over Zoom, our connection was lousy on account of his wandering around what seemed to be a sheep pasture.) Before the competition was under way, he had already extensively promoted his offering, the Circle Guitar. Its website and social-media pages boasted slick designs and a glowing testimonial from Ed O’Brien, the lesser known of Radiohead’s two guitarists: “Occasionally you see or get to play something that makes you think in a totally different way,” O’Brien is quoted as saying. “This is an extraordinary new guitar and I’ve already put an order in to buy the first one.”

The Circle Guitar’s gimmick was a rotating wheel embedded in its body and furnished with sixteen removable magnetic plectrums, programmable to run at any speed between 30 and 250 rpm. It was, in a sense, the augmented instrument par excellence, designed in pursuit of what Anthony called, not un-evocatively, the “never-ending strum.” He had an enviably grandiose sense of the instrument’s future—he described his vision to me as “rock gods running around stadiums.” The Circle Guitar, he said, in a phrase I couldn’t quite grasp but nevertheless admired, aspired to the status of a “hyperinstrument.”

A few of the other contestants came bearing inventions that, while perfectly fun, seemed unlikely to win. A Spanish composer named Santi Vilanova had flown over with the Sonograf, a twelve- by eight-inch screen on which a performer could doodle with a pen or place found objects. The resulting patterns, providing a kind of alternative musical notation, would then be transformed into sound in real time, a process Santi called “image sonification.” It had been developed for elementary school musical curricula in Spain, for which it seemed well suited.

Perhaps the greatest long shot was the Bone Conductive Instrument, a wooden polyhedron designed to be held snugly to the breast, with the side of one’s face resting atop it, almost like an infant. “Bertie,” as its inventor, Pippa Kelmenson, was fond of calling it, was controlled by sliding analog sensors along two vertical tracks. The novelty of the project was that its sonic frequencies could be felt through the musician’s jaw as they played, hearing through the skull, more or less, rather than the outer ear, in a phenomenon known as, well, “bone conduction.” Bertie was pitched principally toward the hearing-impaired community, so while its ambitions toward accessibility were laudable, the whole thing did feel a bit niche, even putting aside Bertie’s persistent mechanical malfunctions. Almost every time I saw Pippa over the course of the competition, she was bent over a half-disassembled Bertie, desperately engaged in some form of life-saving voltaic surgery. The winning entrant, I suspected, would at the very least need to work.

One of the more surprising aspects of the Guthman was that only a single instrument relied on any form of artificial intelligence. This was Thales, the invention of a bubbly Italian named Nicola Privato. Thales consisted of what resembled two hockey pucks, each containing a magnetometer and a series of magnets, and which produced sonic outputs principally through the interaction of their magnetic fields. (In a neat twist, Thales can also respond to any other magnetic field or object that happens to be around. At the Guthman, Nicola demonstrated this with a fiberboard tablet, into which he had installed more magnets, and onto which he had inscribed what he explained to me was an ancient Icelandic spell.)

At first, it was somewhat difficult to discern what AI had to do with this at all. Thales, Nicola patiently explained, operates via a particular species of generative AI called neural audio synthesis, which produces fresh sounds based on whatever dataset it happens to be trained on. (In this particular case, Nicola had used recordings of “magnets being thrown against each other,” as well as some choral music.) What this means in practice is that someone playing Thales, and using a dataset for the first time, will have no idea what sounds they’ll be producing, or how their use of the controllers will affect those sounds, until they’re in the very act of performing. The instrument has to be relearned, more or less, each time the model is trained on a new dataset. Nicola likened this process to baking a cake for the first time: “Sometimes you get it right, sometimes you get it wrong, but you need to taste it afterward to figure out if it actually worked.”

This use of AI bore little resemblance to the boogeyman that scrapes copyrighted material for the raw data needed to generate “new” tracks from simple textual prompts, spitting out something that, while often clumsy and uncanny, is quite recognizably and plausibly a “song,” and that threatens to put a generation of songwriters permanently out of business. On the first day of the competition, Bosko Kante and I loaded up an app called Suno and, with a few simple keywords, managed to produce a toe-tapping jazz number whose scatting vocalist sang Guthman’s praises with only a few stray lyrical hiccups—“Competition’s fierce, talent’s in the air / Guthman’s the place where magic fills the square.” 4 “It’s not bad,” acknowledged Bosko.

No one at the Guthman could muster much enthusiasm about the Sunos of the world and the economic model they represented, but there was plenty of excitement for other, stranger ways in which AI might be put to musical use. Gil Weinberg, a Georgia Tech professor who helped found the modern Guthman, has been leveraging AI in his work for decades. Back in 2005, he invented Haile, an anthropomorphic robot drummer capable of “listening” to music in real time and playing an accompanying beat. More recently, he has been refining Shimon, an improvising robot that sings, dances, and plays marimba with human partners. By engineering robots capable of collaborating, so to speak, with human players, Gil is less interested in making them “sound like humans” than in leveraging both the sophisticated algorithms with which his robots “listen” to their human partners and their uncontested mechanical abilities (Shimon has four arms) to create music that is different in kind from anything one might produce without high-tech aid. Gil described his robotic ambitions thusly: “Listen like a human, play like a machine.”

Thales, Nicola’s more modest AI-powered creation, was what is sometimes known as a “composed” instrument: one whose sound production is independent from the gestures involved in playing it. Thales, for instance, doesn’t transmit discrete audio signals, but rather data that can take on a virtually infinite number of sonic shapes. This severing might explain why I heard so many Guthman candidates describe their creations as “controllers” or “interfaces.”

Case in point was Yuan, a “data-driven instrument” that was the invention of Chi Wang, a professor of music at Indiana University. Yuan was a set of several modified tambourines tricked out with motion and temperature sensors so as to respond to any number of different inputs. Watching Chi perform, tracing wide arcs with the tambourines, or manipulating them on a pulley system she had devised, I had difficulty fully distinguishing her act from choreography, or some kind of interpretive dance. The sound, it seemed, was subservient to the act of making it.

The principle behind XEKI (“eXperimental expressive keyboard instrument”), the creation of a three-man team from New York University, was not dissimilar. Resembling something between a keytar and a trumpet, XEKI was designed to be hoisted on one shoulder and controlled by a series of keys, a moveable grip, and the particular orientation of the XEKI in space, each of which modulated a different sonic parameter. Humans are “multi-modal,” the aptly named Orpheas Kofinakos of the XEKI team told me. (Orpheas was an unmistakable presence at the Guthman, sporting a head of hair that can only be described as Metheny-esque.) We want to not only hear sound, he explained, we want to see it. Nor can these two things be neatly decoupled. So experimenting with the expressivity of instruments—the very motions and actions by which they produce sound—is, for Orpheas, a way of generating fundamentally new species of music, even if the song remains the same.

On Saturday morning, the contestants were sequestered for rehearsal with their assigned accompanists in buildings that otherwise housed the university’s Center for Relativistic Astrophysics, the Photonics Research Group, and the Office of Radiological Safety.

The challenges in figuring out just how to properly accompany an instrument that no one has yet accompanied were myriad. Santi was at pains to make the Sonograf’s unique form of shape-based musical notation legible to the clarinetist he would be performing with. (“I’ll be starting with some stars,” he explained, not entirely helpfully.) Max struggled to find a way to prevent his ethereal VocalCords from being overwhelmed by the performance of his assigned vibraphonist, and Nicola was trying to make sure that the sounds produced by his percussionist partner—by means of “microtonal bowls,” some magnetic spheres, and a ring of keys—were sufficiently “creepy.”

Other contestants faced a mismatch in genre. Anthony and his Circle Guitar were paired with Milk+Sizz, a Grammy-winning Atlanta production duo. Their idea was to pair the Circle Guitar’s chordal drone with a club-ready track of their own devising, which, I feared, might risk complicating Anthony’s guitar-god dreams somewhat:

Because we’re young and beautiful

Champagne flights

We’re living in the moment . . .

Nevertheless, this didn’t seem to bother Anthony, who relished the prospect of a club-style “drop,” whereupon the Circle Guitar’s full and literally revolutionary potential would be unleashed on the world in one singular, epiphanic moment.

Then there were the technical problems: the Lorentz Violin was still plagued by inconsistent intonation, one of Max’s three VocalCords had to be replaced after having snapped during its demo the previous day, and Bertie’s temperamental battery was acting up again. You wouldn’t know any of this, though, from the backstage atmosphere of unrelenting camaraderie and mutual encouragement. Contestants spoke enthusiastically with one another about “latent space” and other concepts more suggestive of a computer science conference than a musical competition. The mood was what I suspect chess camp must feel like: the happy sense that one is meeting future pen pals.

Nicola to Max: “This is so close to being played at gigs! . . . Is it on GitHub?”

Kat to Nicola: “Would you be down to jam?”

Pippa to Santi: “I didn’t know there were people like us in Spain!”

The auditorium that night, though only two-thirds full for the evening’s performances, seemed equally enthusiastic. A well-dressed man seated near me, who said that he played “many instruments” but “mainly guitar and flute,” asked me to take several photos of him in front of the stage, which he then set about furiously texting out to his friends. In the aisle to my right, a young man who I gathered was a member of the esteemed Guthman clan gushed enthusiastically about the press presence, by which he undoubtedly meant the CBS News crew on hand (for whose benefit my own schedule had been constantly and annoyingly tweaked over the course of the weekend).

Eventually, Ángel Cabrera, the president of Georgia Tech, took the stage for some opening remarks. I had spoken to him earlier in the day, and had listened to him both ruminate on the innate human need for musical expression and, somewhat less poetically, compare the Guthman to the television program Shark Tank. “How many times have you watched Star Wars?” he asked the crowd. This year’s competition, he promised, “is going to put the Star Wars canteen [sic] to shame.” 5 After a few more remarks, Cabrera headed offstage, the lights dimmed, and the showcase began.

Things, for the most part, went beautifully. The Babel Table was fittingly paired with a bassoonist who had tricked out her own instrument with a number of effects pedals and seemed to share an instinctive musical language with Jean-François, and Nicola succeeded in manifesting full-on haunted-cathedral vibes with Thales. Santi and his clarinetist partner had apparently agreed on what a “star” signified, and he deftly deployed a number of twigs and leaves, as well as a cigarette lighter, on the Sonograf’s screen to the audience’s great delight. (For his efforts, Santi would earn the People’s Choice award.) Max channeled his inner Arthur Russell and delivered a genuinely lovely, reverb-drenched aria with the aid of his VocalCords, and the BodyMouth dancers crept asymptotically closer to the borders of intelligibility.

The show was not without its missteps. Unsurprisingly, the Circle Guitar didn’t quite sync properly with Milk+Sizz’s party track, and the much-anticipated drop landed with a less than earth-shattering impact, though, in keeping with the song’s theme of “living in the moment,” no one seemed to mind. Nor did anyone seem to care that the Lorentz Violin never did quite manage to stay in tune with its piano partner, producing, if only accidentally, a quivering, hesitant aspect that I found sort of moving.

The Bone Conductive Instrument, though, simply failed to work, and Pippa had to retreat backstage for emergency repairs. (“Bertie’s battery died,” she later explained.) The show went on without her, but in the end Pippa reemerged, a revived Bertie in her hands, and delivered a performance that, if somewhat musically underwhelming, could not have been more impeccably timed or dramatically satisfying.

Soon after, a member of the Guthman family came onstage to deliver the judges’ verdict: third place would go to Nicola’s Thales, second to Jean-François’s Babel Table, and first (deservedly, as far as I was concerned) to Max’s VocalCords. Everyone applauded, everyone seemed happy, everyone seemed to feel that musical justice had been served, if not quite that we had gazed collectively into an infallible crystal ball.

It turned out that the jam session of the future differed more in appearance than in sound. The content had been overtaken by the form; performance preceded essence. Instruments had given way to controllers, interfaces. Augmentation was the default position.

There was something that felt a little sad about this state of affairs, maybe, but there was also something invigorating in its orientation toward live experience. It was as if, in introducing the new modes of performance their instruments enabled, the competitors were actually validating a much older one. To watch Max delicately handle the VocalCords, or to see Orpheas raise the XEKI triumphantly to the ceiling as it crescendoed, was to be reminded, in ways that were alternately a little ridiculous and profound, of the singular power of the stage. Even if none of the entrants was destined to be the next Robert Moog, I could certainly see why Freeman had earlier brought up his example. He must have known, long before I did, that listening to the future of music unfold might not be half as interesting as watching it.
