Luke DuBois and the NYU Ability Project help give performers in the opera Sensorium Ex a cutting-edge way to communicate

A scene from Sensorium Ex, an operatic performance that synthesizes artificial intelligence, disability, and the arts.

Even under the best of circumstances, it’s difficult to stage a new opera; the creative and administrative challenges can seem overwhelming, no matter how talented and experienced the composer and librettist.

And what if your opera’s protagonist were a nonverbal person with multiple disabilities, and you were intent upon casting a performer whose own condition closely mirrors that of the character? In that case, you would need to ignore the skeptics who dismiss a nonverbal opera performer as an impossibility and turn to tech experts who can help.

That’s exactly what composer Paola Prestini did when creating Sensorium Ex, which tells the story of Kitsune, a young, nonverbal man, and his mother, Mem, as they navigate a dystopian world in which corporate greed and a lack of empathy loom large. Prestini reached out to Luke DuBois, who co-chairs NYU Tandon’s Department of Technology, Culture, and Society; co-directs the school’s Integrated Design & Media (IDM) program; and serves on the faculty of the NYU Ability Project, a multi-school initiative with a focus on assistive technology and accessible design.

That contact kicked off a five-year collaboration. In the first stage, Lauren Race, an Ability Project researcher and designer, studied how people felt about the assistive technology then readily available to them. Many used augmentative and alternative communication (AAC) devices, into which they typed words that the device then read aloud in a synthesized voice.
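That type-then-speak loop can be sketched in a few lines. The snippet below is a minimal illustration, not the researchers’ code; it assumes the general-purpose pyttsx3 text-to-speech library as a stand-in for a dedicated AAC device:

```python
# A minimal sketch of the type-then-speak loop an AAC device performs,
# using the general-purpose pyttsx3 library as a stand-in for dedicated
# hardware; the devices the researchers studied are not specified here.
import pyttsx3

engine = pyttsx3.init()          # use the operating system's default voice
engine.setProperty("rate", 150)  # words per minute; one of the few knobs exposed

while True:
    text = input("Type a message (blank to quit): ")
    if not text:
        break
    engine.say(text)             # the same flat delivery, whatever the mood
    engine.runAndWait()
```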

Michael Coney, then an IDM graduate student and experience designer at Arup, assisted with the research along with fellow students Apoorva Avadhana and Spandita Sarmah. He explains: “We interviewed multiple people with cerebral palsy about what having a voice meant to them and how we could be most supportive through collaborative design.”

Interview subjects voiced a common complaint: the voices produced by their devices sounded robotic, with no hint of personality or emotion. Users, particularly those without the mobility to gesture, had no good way to express sarcasm, frustration, exhilaration, or joy. “Vocal identity is complex and personal; it cannot be extracted and digitized out of its organic context,” the researchers concluded.

DuBois, who has a large network of fellow technologists, artists, and musicians, began calling in reinforcements to work on the problem.

Prestini had already identified two actors to portray Kitsune in rotation: Kader Zioueche, who has cerebral palsy, and Jakob Jordan, who has autism and apraxia, a condition that severely impacts speech.

DuBois tapped Mark Cartwright, who had worked for several years in NYU’s Music and Audio Research Lab and Tandon’s Center for Urban Science & Progress. By then the head of the Sound Interaction and Computing Lab at the New Jersey Institute of Technology (NJIT), Cartwright signed on to the project and began recruiting students and colleagues to help.

Using a combination of a text-to-speech synthesizer and a specialized neural vocoder (a type of neural network used in speech synthesis to convert audio into low-dimensional acoustic features and back again), his team recorded the natural vocalizations Jordan and Zioueche made and trained AI models to transform the libretto into expressive speech in each actor’s own vocal style.
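The team’s actual models aren’t reproduced here, but the round trip a vocoder performs (audio in, compact features out, audio back again) can be sketched briefly. In this illustration the classical Griffin-Lim algorithm stands in for the trained neural vocoder, and a bundled librosa example clip stands in for the actors’ recordings:

```python
# A minimal sketch of the vocoder round trip described above: waveform ->
# low-dimensional acoustic features (a mel spectrogram) -> waveform.
# Griffin-Lim is a classical stand-in; the production system used a trained
# neural network for the inversion step.
import librosa
import numpy as np

def encode(waveform: np.ndarray, sr: int, n_mels: int = 80) -> np.ndarray:
    """Compress audio into the compact mel-spectrogram features a vocoder consumes."""
    return librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=n_mels)

def decode(mel: np.ndarray, sr: int) -> np.ndarray:
    """Invert the features back to audio; a neural vocoder yields far more
    natural speech than this Griffin-Lim approximation."""
    return librosa.feature.inverse.mel_to_audio(mel, sr=sr)

y, sr = librosa.load(librosa.example("trumpet"))  # any mono recording works here
features = encode(y, sr)                          # shape: (n_mels, frames)
reconstructed = decode(features, sr)
print(features.shape, reconstructed.shape)
```

The interesting engineering lives in the decode step: a neural vocoder trained on a particular speaker’s recordings learns to reconstruct not just intelligible audio but that speaker’s timbre, which is what made the synthesized voices recognizably the actors’ own.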

It was a massive undertaking: the AI team ultimately included not just NJIT members but also researchers from the University of Illinois Urbana-Champaign and several from Northwestern University, including Max Morrison, whom Cartwright credits with establishing the foundations of the technology.

The results, however, were stunning: while the voices were still somewhat synthetic, they were recognizably the actors’ own. Emotions ran high in the room when the technology was first demonstrated, but one issue remained. Although the actors could prerecord their portions of the libretto, written by renowned poet Brenda Shaughnessy (herself the mother of a nonverbal child), they still needed a way to control the flow of their words and to add emotion and emphasis.

Artist and engineer Eric Singer, an IDM researcher-in-residence whose areas of expertise include the use of sensors and robotics in multimedia systems, stepped in. (Among his early projects was the League of Electronic Musical Urban Robots, or LEMUR.)

Because the actors were capable of some rudimentary hand motions, Singer was able to leverage a device of his own creation, the Ther’minator (a play on the theremin, an electronic musical instrument controlled without physical contact), which converts Light Detection and Ranging (LiDAR) signals into distance data and transmits it over WiFi. The actors could thus control the nuances of their operatic speech in real time. (The low-cost sensors used, Singer explains, are similar to those in self-driving cars, which sense distances to obstacles.)
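Singer’s actual firmware and protocol aren’t described in detail here, but the control loop such a device implies is easy to sketch: sample a distance, map it to an expressive parameter, and stream it over WiFi. Everything in the snippet below (the receiver address, the JSON field, the simulated sensor) is illustrative rather than the Ther’minator’s real design:

```python
# A hypothetical sketch of a LiDAR-to-WiFi control loop: read a hand
# distance, map it to a 0.0-1.0 emphasis value, and send it as UDP packets
# for a speech engine to consume. The sensor read is simulated here.
import json
import random
import socket
import time

RECEIVER = ("192.168.1.50", 9000)  # hypothetical address of the speech engine

def read_distance_mm() -> int:
    """Stand-in for a real time-of-flight sensor read; simulates a hand
    moving between 50 mm and 500 mm from the device."""
    return random.randint(50, 500)

def distance_to_emphasis(d_mm: int, near: int = 50, far: int = 500) -> float:
    """Map distance to emphasis: the closer the hand, the stronger the value."""
    clamped = max(near, min(far, d_mm))
    return 1.0 - (clamped - near) / (far - near)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(200):  # a short demo run
    value = distance_to_emphasis(read_distance_mm())
    sock.sendto(json.dumps({"emphasis": round(value, 3)}).encode(), RECEIVER)
    time.sleep(0.05)  # ~20 Hz, fast enough to feel continuous to a performer
```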

Sensorium Ex, which was featured on PBS News Hour, had its premiere in early 2025 at the Common Senses Festival in Omaha, and Prestini is working to bring it to additional cities and stages soon. She feels its success could convince others to create similarly inclusive performances. (Might a blockbuster like Hamilton one day be mounted with nonverbal performers?)

The stage, DuBois believes, may be just the beginning. “We wanted to use Sensorium Ex as a test case for the technology, which we’ve very intentionally made available on an open-source basis, so anyone in the nonverbal or minimally speaking community can use it,” he says.

Shaughnessy, whose acclaimed poetry collection Our Andromeda includes a long poem in which she discusses her son’s disabilities, holds out hope for that day. “As human beings, we all have varying levels of verbal ability, and many people need help to some degree,” she says. “Working on Sensorium Ex has been an epic journey, and it would be wonderful if it culminates in technology that gives nonverbal people a voice.”