ProducerAI, music generator, joins Google Labs

Google announced that the generative AI music tool ProducerAI will join Google Labs.

Supported by The Chainsmokers, ProducerAI enables users to input natural language requests like “make a lofi beat” to create music. It utilizes Google DeepMind’s Lyria 3 model, which can transform text and even image inputs into audio outputs.

Google mentioned that Lyria 3’s features would also be integrated into the Gemini app, but ProducerAI offers users a more collaborative interaction with the AI model, according to Elias Roman, Senior Director of Product Management at Google Labs.

“ProducerAI has offered me new creative avenues,” Roman wrote in a blog post. “I’ve tried new genre fusions, crafted personalized birthday songs for loved ones, and designed custom workout playlists for friends and myself.”

Google also revealed that Grammy-winning artist Wyclef Jean utilized the Lyria 3 model and Music AI Sandbox on his recent track “Back From Abu Dhabi.”

Jeff Chang, Director of Product Management at Google DeepMind, explained in a video, “This isn’t just about pressing a button repeatedly. It’s a curated process where you select something useful.”

Jean shared how he used Google’s tools to quickly add a flute sound to a pre-recorded track.


“What I want everyone to realize is that we’re in an era where human creativity is essential,” Jean stated in the video. “Humans have a soul, whereas AI has vast amounts of information.”

AI in the music industry

Some musicians have strongly opposed AI tools in music creation, as these tools are often trained on copyrighted material without permission. Hundreds of artists, including Billie Eilish and Katy Perry, signed a letter in 2024 urging tech companies not to diminish human creativity through AI music tools.

Recently, a group of music publishers sued the AI company Anthropic for $3 billion, accusing it of illegally downloading more than 20,000 copyrighted songs, including sheet music, lyrics, and compositions. Anthropic previously agreed to a $1.5 billion settlement with a group of authors over books it pirated for AI training.

Conversely, some artists see this technology as a way to enhance audio quality. Paul McCartney used AI-powered noise reduction to improve a low-quality John Lennon demo, resulting in the Grammy-winning “new” Beatles track, “Now and Then.”

AI music tools like Suno have produced synthetic music that has achieved success on Spotify and Billboard. Telisha Jones from Mississippi used Suno to turn her poetry into the viral R&B song “How Was I Supposed To Know,” eventually securing a record deal with Hallwood Media reportedly worth $3 million.

The legal status of using copyrighted works for AI training remains unsettled. Federal Judge William Alsup ruled that training on lawfully obtained copyrighted works can qualify as fair use, but pirating those works cannot.