Even if you didn’t watch last weekend’s episode of Saturday Night Live, you still probably saw it. You may already even know what “it” I’m talking about: Timothée Chalamet, and other similarly dressed cast members, booty-shaking in tiny little red undies. He was, the sketch goes, “an Australian YouTube twink turned indie pop star and model turned HBO actor Troye Sivan being played by an American actor who can’t do an Australian accent.” Chalamet and his cohort were Troye Sivan Sleep Demons, and they’d been haunting straight women all over the place. It was a funny bit and, ironically, the least nightmarish Sivan impression to come out this week.
On Thursday, Google DeepMind announced Lyria, which it calls its “most advanced AI music generation model to date,” along with a pair of “experiments” for music making. One is a set of AI tools that allow people to, say, hum a melody and have it turned into a guitar riff, or transform a keyboard solo into a choir. The other is called Dream Track, and it allows users to almost instantly make 30-second YouTube Shorts using the AI-generated voices and musical styles of artists like T-Pain, Sia, Demi Lovato, and—yes—Sivan. All anyone has to do is type in a topic and pick an artist off a carousel, and the tool writes the lyrics, produces the backing track, and sings the song in the style of the musician selected. It’s wild.
My freak-out about this isn’t a fear of a million fake Troye Sivans haunting my dreams; it’s that the most creative work shouldn’t be this easy. It should be difficult. To borrow from A League of Their Own’s Jimmy Dugan, “It’s supposed to be hard. If it wasn’t, everyone would do it. The hard is what makes it great.” Yes, asking a machine to make a song about fishing in the style of Charli XCX is fun (or at least funny), but Charli XCX songs are good because they’re full of her attitude, something that comes through even when she writes for other people, like she did on Icona Pop’s “I Love It.” To borrow again, from a sign hoisted during the Hollywood writers strike, “ChatGPT doesn’t have childhood trauma.”
Not that these tools have no use. They are, more than anything, meant to help cultivate ideas and, in the case of Dream Track, “test new ways for artists to connect with their fans.” It’s about making new experimental noises for YouTube, rather than Billboard chart-toppers. As Lovato, one of the artists who allowed DeepMind to use their music for this project, said in a statement, AI is upending how artists work and “we need to be a part of shaping what that future looks like.”
Google’s latest AI music toy comes at a tricky time. Generative AI creates something of a digital minefield when it comes to copyright, and YouTube, which Google owns, has been trying to handle both an influx of AI-made music and the fact that it has agreements with labels to pay when artists’ work shows up on the platform. A few months ago, when “Heart on My Sleeve”—an AI-generated song by “Drake” and “The Weeknd”—went viral, it was ultimately pulled from several streaming services following complaints from the artists’ label, Universal Music Group.
But even if, say, the manager of Johnny Cash’s estate isn’t seeking to stop AI-generated covers of “Barbie Girl,” the technology still presents a conundrum for artists: They can either work with companies like Google to create AI tools using their music, make their own tools (like Holly Herndon and Grimes have), push back and see whether copyright law applies to music made from AI models trained on their work, or do nothing. It’s a question seemingly every artist is thinking about right now, or at least getting asked about.