The AI Book
    Daily AI News

    How AI May Be Used to Create Custom Disinformation Ahead of 2024

1 August 2023

    “If I want to launch a disinformation campaign, I can fail 99 percent of the time. You fail all the time, but it doesn’t matter,” Farid says. “Every once in a while, the QAnon gets through. Most of your campaigns can fail, but the ones that don’t can wreak havoc.”

    Farid says we saw during the 2016 election cycle how the recommendation algorithms on platforms like Facebook radicalized people and helped spread disinformation and conspiracy theories. In the lead-up to the 2024 US election, Facebook’s algorithm—itself a form of AI—will likely be recommending some AI-generated posts instead of only pushing content created entirely by human actors. We’ve reached the point where AI will be used to create disinformation that another AI then recommends to you.

    “We’ve been pretty well tricked by very low-quality content. We are entering a period where we’re going to get higher-quality disinformation and propaganda,” Starbird says. “It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”

What can be done about this problem? Unfortunately, only so much. DiResta says people need to be made aware of these potential threats and be more careful about the content they engage with. For example, she says, you'll want to check whether your source is a website or social media profile that was created very recently. Farid says AI companies also need to be pressured to implement safeguards so that less disinformation is created in the first place.

    The Biden administration recently struck a deal with some of the largest AI companies—ChatGPT maker OpenAI, Google, Amazon, Microsoft, and Meta—that encourages them to create specific guardrails for their AI tools, including external testing of AI tools and watermarking of content created by AI. These AI companies have also created a group focused on developing safety standards for AI tools, and Congress is debating how to regulate AI.

    Despite such efforts, AI is accelerating faster than it’s being reined in, and Silicon Valley often fails to keep promises to only release safe, tested products. And even if some companies behave responsibly, that doesn’t mean all of the players in this space will act accordingly.

    “This is the classic story of the last 20 years: Unleash technology, invade everybody’s privacy, wreak havoc, become trillion-dollar-valuation companies, and then say, ‘Well, yeah, some bad stuff happened,’” Farid says. “We’re sort of repeating the same mistakes, but now it’s supercharged because we’re releasing this stuff on the back of mobile devices, social media, and a mess that already exists.”

