    Generative AI Is Playing a Surprising Role in Israel-Hamas Disinformation

    30 October 2023


    “Given the creativity humans have showcased throughout history to make up (false) stories and the freedom that humans already have to create and spread misinformation across the world, it is unlikely that a large part of the population is looking for misinformation they cannot find online or offline,” the paper concludes. Moreover, misinformation only gains power when people see it, and because the time people have for viral content is finite, the impact is negligible.

    As for the images that might find their way into mainstream feeds, the authors note that while generative AI can theoretically render highly personalized, highly realistic content, so can Photoshop or video editing software. Changing the date on a grainy cell phone video could prove just as effective. Journalists and fact checkers struggle less with deepfakes than they do with out-of-context images or those crudely manipulated into something they’re not, like video game footage presented as a Hamas attack.

    In that sense, an excessive focus on a flashy new technology is often a red herring. “Being realistic is not always what people look for or what is needed to be viral on the internet,” adds Sacha Altay, a coauthor on the paper and a postdoctoral researcher at the University of Zurich’s Digital Democracy Lab whose current work focuses on misinformation, trust, and social media.

    That’s also true on the supply side, explains Mashkoor; invention is not implementation. “There’s a lot of ways to manipulate the conversation or manipulate the online information space,” she says. “And there are things that are sometimes a lower lift or easier to do that might not require access to a specific technology. Even though AI-generating software is easy to access at the moment, there are definitely easier ways to manipulate something if you’re looking for it.”

    Felix Simon, another of the authors on the Kennedy School paper and a doctoral student at the Oxford Internet Institute, cautions that his team’s commentary is not seeking to end the debate over possible harms, but is instead an attempt to push back on claims that generative AI will trigger “a truth armageddon.” These kinds of panics often accompany new technologies.

    Setting aside the apocalyptic view, it’s easier to study how generative AI has actually slotted into the existing disinformation ecosystem. It is, for example, far more prevalent than it was at the outset of the Russian invasion of Ukraine, argues Hany Farid, a professor at the UC Berkeley School of Information.


