Stretching the Bounds of Reality With AI Sensorium at ODC
By Lou Fancher
Fake reality: It’s a thing.
And it has been for decades, according to composer Timothy Russell. Long before machine learning and artificial intelligence (AI) began spewing out the internet’s deepfakes, cheap fakes, robocalls, bots, and, most egregiously, nude fakes, there was Thomas Edison’s 1877 phonograph and its wax cylinder, the first medium for recording and reproducing sound.
“What is digital and what is not?” Russell asks in an interview about the new work he is composing for the world premiere on Feb. 21 of AI Sensorium, a multidisciplinary work created by Kinetech Arts and presented by ODC Theater. “Even if I was recording birds and playing it for you, it’s still a digital artifact of reality. Back in Thomas Edison’s day, they’d hide the (phonograph) speakers behind a curtain and ask if people could tell if it was an actual orchestra playing in a concert hall or a recording. So that has been an interesting question for a long time.”
Indeed, so interesting that Kinetech artistic co-directors and ODC Theater resident artists Daiane Lopes da Silva and Weidong Yang have made this intersection of dance, science, technology, captured sound, and data gathering in contemporary life the centerpiece of Sensorium. With choreography primarily by Lopes da Silva and interactive visual design by Yang, five dancers investigate issues of control, privacy, skewed reality, surveillance of physical bodies, and ultimately, the digital world’s impact on humanity and culture.
The work unfolds in three acts under an overall arc of escalating intensity. Act I is a solo in which Lopes da Silva improvises on a set vocabulary of sharp, angular movement. Electromyography sensors and a contact mic measure the electric currents in her muscles, along with additional data she gathers by touching audience members’ bodies with the mic. Once amplified, the electric current drives a soundscape and a light display onstage. Yang says the data-stream setup is “a monument for the fusion of humans and their ultimate technological creation, AI.”
Act II employs more machine learning, with speech turned into granular components and the words reorganized into unsettling narratives. Dancers in green wigs perform balletic movements and apply tape to their bodies to amplify the distortions. Lopes da Silva says, “Actual deepfake technology is used to transpose one dancer’s face onto another, convincing and yet horrifying. Projection mapping creates the illusion of fake news coming out of a person’s butt. It is funny, yet alarming. We try to hold the truth, but we cannot.”
Act III introduces a cult-like “Digital Garden of Eden.” Here, the contact mic’s output signal is amplified to power four sound exciters that cause enormous, two-meter-tall metal panels to vibrate, resonate, and cast shimmering pools of light. “We used to worship ancestors and nature. Now, we worship technology,” says Lopes da Silva. While studying how AI teaches itself to walk, the co-directors created a score that forces the dancers into unconventional locomotion. “We also use lots of bowing, kneeling, and electrical movements. In Act III, there is a power dynamic. We think we control our creation, but in reality, we are controlled by what we have created.”
The technology was developed in collaboration with Patricia Alessandrini, who teaches in the Department of Music at Stanford University and serves as Sensorium’s sound designer and interactive consultant. Yang holds a master’s degree in computer science and a Ph.D. in physics from the University of Oregon and says, “The transformation of data (sourced from the body) to sound and light is done in a way to comment on the oddity of how technology, a human creation, is progressively dominating our life, thought, and ideology.”
Composer Russell, who launched his professional career as a 15-year-old percussionist playing with rock bands in and around Chicago, has a degree in percussion performance from the University of Wisconsin, where he is now music director for the dance department, and an MFA in music improvisation from Mills College in Oakland. “My background is in improvisation. It integrates in the spaces I’m creating. I call them spaces more so than pieces of music with linear structure. I’m making spaces where dancers can explore.”
Striving for a sonic aura that is otherworldly yet grounded in reality, Russell took himself on the kind of “acoustic ecology” tour he uses in the classroom. “I do sound walks. All you do is go out and walk around with the intention of listening. The world around us is full of amazing sounds.”
For the Digital Garden section, Russell asked himself, “What does it sound like to be in a park?” He collected the sound of wind in the trees, the low hum of a city, birds singing, people talking. “I then go into my software and say, ‘Let’s make some bees; this is people talking; birds come in.’ They’re digital representations of those sounds. What does it mean to sound digital? That’s getting increasingly hard to define.”
As is defining truth — and what constitutes our body and who sets its limits. “It’s crazy to think the tech has gotten so real we don’t know what it means to be fake,” says Russell. Lopes da Silva, reflecting on the messages Sensorium transmits about AI and machine learning, says, “As the piece progresses, there is a sense of loss of humanity, and a desire to go back to the time when we had ownership of our own lives.” Later, she adds, “I know there are things we cannot change, but if we are aware of what is happening with AI, we can make better choices about how we want to spend our time together on this Earth.”