On Monday, the leadership of the Screen Actors Guild–American Federation of Television and Radio Artists held a members-only webinar to discuss the contract the union tentatively agreed upon last week with the Alliance of Motion Picture and Television Producers. If ratified, the contract will officially end the longest labor strike in the guild’s history.
For many in the industry, artificial intelligence was one of the strike’s most contentious, fear-inducing issues. Over the weekend, SAG released details of the AI terms it agreed to, an expansive set of protections that require consent and compensation for all actors, regardless of status. With this agreement, SAG has gone substantially further than the Directors Guild of America or the Writers Guild of America, both of which came to terms with the AMPTP before it did. This isn’t to say that SAG succeeded where the other unions failed, but rather that actors face a more immediate, existential threat from machine-learning advances and other computer-generated technologies.
The SAG deal is similar to the DGA and WGA deals in that it demands protections for any instance where machine-learning tools are used to manipulate or exploit members’ work. All three unions have claimed their AI agreements are “historic” and “protective,” and whether one agrees or not, these deals function as important guideposts. AI doesn’t just pose a threat to writers and actors—it has ramifications for workers in all fields, creative or otherwise.
For those looking to Hollywood’s labor struggles as a blueprint for handling AI in their own disputes, it’s important that these deals contain the right protections, so I understand those who have questioned them or pushed for them to be more stringent. I’m among them. But there is a point at which we are pushing for things that cannot be accomplished in this round of negotiations, and may not need to be pushed for at all.
To better understand what the public generally calls AI, and the threat it is perceived to pose, I spent months during the strike meeting with leading machine-learning engineers and tech experts, as well as legal scholars specializing in Big Tech and copyright law.
What I learned confirmed three key points. The first is that the gravest threats are not the ones most discussed in the news: most of the people machine-learning tools will negatively impact aren’t the privileged but low-wage and working-class laborers and marginalized and minority groups, due to the biases inherent in the technology. The second is that the studios are as threatened by the rise and unregulated power of Big Tech as the creative workforce is, something I wrote about in detail earlier in the strike here and that WIRED’s Angela Watercutter astutely expanded upon here.