In a new post this morning, Meta announced it will identify and label AI-generated content on Facebook, Instagram and Threads — though it cautioned it is “not yet possible to identify all AI-generated content.”
The announcement comes two weeks after pornographic AI-generated deepfakes of singer Taylor Swift went viral on X (formerly Twitter), drawing condemnation from fans and lawmakers, as well as global headlines. It also arrives as Meta faces pressure to deal with AI-generated images and doctored videos ahead of the 2024 US elections.
Nick Clegg, president of global affairs at Meta, wrote that “these are early days for the spread of AI-generated content,” adding that as it becomes more common, “there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.” The company, he said, would “continue to watch and learn, and we’ll keep our approach under review as we do. We’ll keep collaborating with our industry peers. And we’ll remain in a dialogue with governments and civil society.”
The post emphasized that Meta is working with industry organizations like the Partnership on AI (PAI) to develop common standards for identifying AI-generated content. It said the invisible markers used for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices.
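For readers curious what such markers look like in practice, here is a minimal Python sketch that inspects an image for AI-provenance metadata using the Pillow library. It searches for the IPTC “digital source type” term for algorithmically generated media; the exact fields Meta writes are not public, so this is illustrative rather than a detector for Meta’s labels, and “photo.jpg” is a placeholder path.

```python
from PIL import Image, IptcImagePlugin

# IPTC NewsCodes defines a "digital source type" term for media produced by
# generative AI. The precise markers Meta embeds are not public, so this
# check is an illustration, not a real detector for Meta's labels.
AI_SOURCE_TYPE = "trainedalgorithmicmedia"

def looks_ai_labeled(path: str) -> bool:
    im = Image.open(path)

    # 1) IPTC-IIM records embedded in the file, if any
    iptc = IptcImagePlugin.getiptcinfo(im) or {}
    if any(AI_SOURCE_TYPE in str(v).lower() for v in iptc.values()):
        return True

    # 2) Fall back to scanning the raw bytes, which also catches the term
    #    inside an XMP packet (where IPTC Extension fields usually live)
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE.encode() in f.read().lower()

print(looks_ai_labeled("photo.jpg"))  # hypothetical file path
```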
Meta said it would label images that users post to Facebook, Instagram and Threads “when we can detect industry standard indicators that they are AI-generated.” The post added that photorealistic images created using Meta AI have been labeled since the service launched “so that people know they are ‘Imagined with AI.’”
Clegg wrote that Meta’s approach “represents the cutting edge of what’s technically possible right now,” adding that “we’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks.”
Latest effort to label AI-generated content
Meta’s announcement is the latest effort to identify and label AI-generated content through techniques such as invisible watermarks. Back in July 2023, seven companies promised President Biden they would take concrete steps to enhance AI safety, including watermarking, while in August, Google DeepMind released a beta version of a new watermarking tool, SynthID, that embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification.
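SynthID’s actual algorithm is proprietary, but a toy least-significant-bit (LSB) scheme illustrates the general idea: a mark written into pixel values that the eye cannot see but that software which knows where to look can read back. The Python sketch below is an assumption-laden illustration of the concept, not how SynthID works.

```python
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the least-significant bit of each pixel."""
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, set bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read the first n watermark bits back out of the LSBs."""
    return pixels.flatten()[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=128, dtype=np.uint8)          # 128-bit watermark

marked = embed(image, mark)
assert np.array_equal(extract(marked, mark.size), mark)           # recoverable
assert np.abs(marked.astype(int) - image.astype(int)).max() <= 1  # imperceptible
```

Each pixel changes by at most one brightness level, which is why the mark is invisible to a viewer yet trivially recoverable by the extractor.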
But so far, digital watermarks — whether visible or invisible — are not sufficient to stop bad actors. In October, Wired quoted a University of Maryland computer science professor, Soheil Feizi, who said “we don’t have any reliable watermarking at this point — we broke all of them.” Feizi and his fellow researchers examined how easy it is for bad actors to evade watermarking attempts. In addition to demonstrating how attackers might remove watermarks, they showed how to add watermarks to human-created images, triggering false positives.
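To make that fragility concrete, the self-contained sketch below rebuilds the toy LSB watermark from the previous example, then applies the kind of tiny brightness jitter a screenshot or re-encode introduces; the recovery rate collapses well below 100%. And since anyone can embed the same mark into a human-made photo, false positives are just as easy in this toy scheme. Production watermarks are far more robust than this, but Feizi’s research suggests they fail under stronger versions of the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)       # the watermark bits
pixels = rng.integers(0, 256, size=4096, dtype=np.uint8)  # flat stand-in image
pixels[:128] = (pixels[:128] & 0xFE) | mark               # LSB-embed, as above

# A screenshot- or re-encode-like perturbation: tiny random brightness jitter
noise = rng.integers(-2, 3, size=pixels.shape)
attacked = np.clip(pixels.astype(int) + noise, 0, 255).astype(np.uint8)

survived = float(np.mean((attacked[:128] & 1) == mark))
print(f"watermark bits intact after the attack: {survived:.0%}")  # well below 100%
```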
Experts say watermarks are useful, but not a ‘silver bullet’ for AI content
Margaret Mitchell, chief ethics scientist at Hugging Face, told VentureBeat in October that these invisible digital watermarks are useful, but not a “silver bullet” to identify AI-generated content.
However, she emphasized that while digital watermarks may not stop bad actors, they are a “really big deal” for enabling and supporting good actors who want a sort of embedded ‘nutrition label’ for AI content.
When it comes to the ethics and values surrounding AI-generated images and text, she explained, one set of values is related to the concept of provenance. “You want to be able to have some sort of lineage of where things came from and how they evolved,” she said. “That’s useful in order to track content for consent, credit and compensation. It’s also important in order to understand what the potential inputs for models are.”
It’s this bucket of watermarking users that Mitchell said gets her “really excited.” “I think that has really been lost in some of the recent rhetoric,” she said, explaining that there will always be ways AI technology doesn’t work well. But that doesn’t mean the technology as a whole is bad.