Innovation Monitor: The Augmented Musician — from the Summit 2020 Panel

A dive into the world of technology-augmented sound

Welcome to this week’s Innovation Monitor.

We’ve previously covered how technology changes the way we experience music, particularly the virtualization of the concert experience. This week, we’re diving into how new tech can augment the music creation process, the subject of an amazing panel from our NYC Media Lab Summit 2020 (the full video will be up soon).

We’ll explore how technologist-musicians are utilizing affordable or open-source tools and off-the-shelf parts to open up the music creation experience to aspiring creators. Let’s kick things off with a band called Living Colour.

As always, we wish you and your community safety, calm and solidarity as we support each other through this unprecedented time. Thank you for reading!

All best,
Erica Matsumoto

The Augmented Musician

If you haven’t heard Living Colour before, check out their “Type” MV below — they’re pretty brilliant and have awesome energy.

The band’s guitarist, Vernon Reid, recently joined The American Society of Composers, Authors and Publishers (ASCAP) Lab (see the Lab promo below) as an Artist in Residence, and was also featured on a panel at the recent NYC Media Lab Summit.

Reid was joined by ASCAP’s SVP of strategy and business development Brooke Eplee, NYU professor Pablo Ripollés, and music technologist and artist David Azar. The latter two spoke about their respective projects during the ASCAP Lab’s 11-week open university challenge for graduate students and faculty in creative tech, music, design, and technology.

Reid kicked off the panel, answering Eplee’s opening “How are you?” with, “I’m doing as well as anyone in a dystopian hellscape can.” I think we can all relate.

Reid is sort of a conduit for the analog perspective of the 80s and the digitally-augmented world of today. He’s been involved in media and technology throughout his career, including the 2018 VR experience Ashe ’68. But one thing stays consistent, no matter where you are in music history — “the happy oops,” as Eplee called it.

Referencing a chat between Guillermo del Toro and Alec Baldwin at Tribeca — where del Toro discussed his creative process — Reid brings up the importance of experimenting, being less concerned with the efficacy of the end result, and “creating happy accidents and collisions.” Reid continues:

“That’s how hip hop started, that’s how many different things came into being, like the [Roland TR-808] drum machine… a product that [initially] failed. Roland was trying to sell the 808 and it was really expensive and they weren’t moving them. They were dropping them on [48th street] for a [few hundred dollars] and that made them affordable for [musicians like] Afrika Bambaataa & The Soulsonic Force to pick up… and they changed the course of music history.”

Key here is the affordability and creative potential that the TR-808 presented. “Drum machines like the 808 spawned the era of ‘bedroom producers’ such as Rick Rubin (who used an 808 in his NYU dorm) and Pete Rock,” wrote The Verge in its history of the TR-808.

While the iPad isn’t usually singled out for enabling “bedroom producers,” it should be. “There was one artist who was nominated for a Grammy who did all the engineering on their iPad,” said Reid. He was referencing Henny Tha Bizness, an award-winning producer who has produced for Jay-Z, Ice Cube, Kendrick Lamar, and many more, and whose YouTube channel is full of videos on beat making with an iPad.

In a Jonathan Morrison video from 2018, Henny said that “technology’s always going to advance, and to the person who says, ‘it’s not possible to be professional on an iPad, in any type of creative, everyday lifestyle,’ I say ‘OK, you’re going to see because two years down the line, you’re going to be the guy looking to jump in when everybody’s 10 miles ahead of you.’”

And this theme of affordability, accessibility, and creative potential brings us to Azar and Ripollés’ projects for ASCAP Lab, which they summed up in their respective panel discussions.

Azar discussed his team’s work on the beginner-friendly Madd sampler, a recording device made from off-the-shelf and open-source components that features a prominent silicone pad. The pad changes your sample based on how you move your hand over it (see the segment at 24:40 in the uploaded video). One of Azar’s driving motivations behind the Madd sampler — besides making an intuitive, novel music interface — was figuring out how musicians can become technologists and vice versa.
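As a purely illustrative sketch of how a pad like this might drive playback — the function name, the mapping, and the parameter ranges are all our assumptions, not the Madd team’s actual firmware:

```python
# Hypothetical pad-to-playback mapping for a Madd-style sampler.
# Everything here is an assumption for illustration; the real device's
# internals were not shown on the panel.

def map_pad_to_playback(x: float, y: float, pressure: float) -> dict:
    """Map normalized pad readings (each 0.0-1.0) to playback parameters.

    One plausible scheme: horizontal position scrubs the sample's start
    point, vertical position bends pitch, and pressure sets volume.
    """
    if not all(0.0 <= v <= 1.0 for v in (x, y, pressure)):
        raise ValueError("pad readings must be normalized to 0.0-1.0")
    return {
        "start_offset": x,                  # fraction of the sample to skip
        "pitch_semitones": (y - 0.5) * 24,  # +/- one octave around center
        "gain": pressure,                   # harder press = louder
    }
```

Under this hypothetical scheme, resting a hand at the pad’s vertical center with full pressure would leave pitch untouched while playing at full volume.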

Ripollés, a researcher with a focus on (apparently) disparate fields such as language learning, music, and reward and memory, worked with his team on CHILLERs — the Computer Human Interface for Live Labeling of Emotional Responses. “Part of what I do is to study the reward system in our brain…. Music is something everyone finds rewarding… but it’s very particular to each of us. I’m baffled by the fact that the same pattern of sounds can be pleasurable for one person and totally opposite for another.”

“The origin of CHILLERs is we really wanted to understand how it’s possible that a pattern of sounds has this huge effect, and why it’s different from person to person. We know how to study behavioral or physiological reactions in the lab, but it’s very anticlimactic — you’re in front of a computer or connected to some electrodes, or inside an MRI machine. We can do that. But I was always bothered because this isn’t the way we listen to music…. I wanted to know how we react to music in the real world. A year ago I woke up with an idea — we were going to measure goosebumps.”

The CHILLERs DIY device uses a camera to detect goosebumps, since moving around would muddle the output of conventional contact sensors. The moment the device detects goosebumps, an LED lights up, indicating that somebody is experiencing an emotional peak. Here’s the Raspberry Pi Zero-based wearable in action (see the segment at 37:19 in the uploaded video):
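To make the idea concrete, here’s a minimal toy sketch of camera-based goosebump detection. The variance heuristic, threshold, and function names are our assumptions — the CHILLERs team didn’t detail their actual image pipeline on the panel:

```python
# Toy goosebump detector: raised bumps add speckle to otherwise smooth
# skin, which shows up as higher pixel variance in a grayscale patch.
from statistics import pvariance

GOOSEBUMP_VARIANCE_THRESHOLD = 150.0  # assumed tuning constant

def goosebumps_detected(gray_patch):
    """True when a grayscale patch (rows of 0-255 pixel values) shows
    the kind of texture variance that raised bumps would produce."""
    pixels = [p for row in gray_patch for p in row]
    return pvariance(pixels) > GOOSEBUMP_VARIANCE_THRESHOLD

def update_led(gray_patch):
    # On the real wearable this would drive a GPIO pin on the Pi Zero.
    return "LED ON" if goosebumps_detected(gray_patch) else "LED OFF"
```

A uniform patch stays below the assumed threshold and leaves the LED off; a speckled one trips it.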

You can read more about Azar and Ripollés’ projects in this press release.

Experiments with AI in Music Creation

We also recommend the following supplementary pieces, which explore AI’s role in the music creation process:

  • Google Magenta artist and technologist Vibert Thio designed Lo-Fi Player, an interactive music game that uses two machine learning models running in the background: “One, tucked away in the radio, generates new melodies when clicked on; the other, hidden in the TV, interpolates between two melodies to create something that sounds a little bit like both.” Read more about it here.
  • Nylon featured a great dive into AI’s role in the future of music, discussing OpenAI’s music creation model Jukebox, speaking with Holly Herndon on her PROTO album (you really need to see her Eternal video), and overviewing tools like Popgun, “a startup with products that include an app that children can use to create songs with AI.”
  • The Nylon piece above mentions Eurovision’s AI Song Contest, but Bloomberg covers it in greater detail. What’s especially noteworthy is the varying degrees of curation the teams employed for the competition: “The teams used varying degrees of intervention — from including human producers and vocalists and curating their programs’ output to letting the software take control with as little of their interference as possible. The French team Algomus & Friends combined human composition and edited versions of AI-created music and lyrics for a song that sounds more like a typical Eurovision entry.”


This Week in Business History

October 13th, 1893: The melody to the song “Happy Birthday To You” is copyrighted by the Hill Sisters

In keeping with this week’s music theme: in 1893, two sisters registered the copyright to a song, “Good Morning To All,” which would become the melody for the ubiquitous birthday tune that remains incredibly difficult and awkward to sing, yet one you will hear countless times each year. The copyright situation sparked a century-long dispute that remains murky in terms of exactly who owns the song. This On the Media episode captures it quite well.

The original Good Morning To All





NYC Media Lab · 370 Jay Street, 3rd floor · Brooklyn, New York 11201 · USA

NYC Media Lab connects university researchers and NYC’s media tech companies to create a new community of digital media & tech innovators in New York City.