    Meet the Pranksters Behind Goody-2, the World’s ‘Most Responsible’ AI Chatbot

9 February 2024


    Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved. The recent outbreak of Taylor Swift deepfakes on Twitter turned out to stem from an image generator released by Microsoft, which was one of the first major tech companies to build up and maintain a significant responsible AI research program.

The restrictions placed on AI chatbots, and the difficulty of finding a moral alignment that pleases everybody, have already become a subject of some debate. Some developers have alleged that OpenAI's ChatGPT has a left-leaning bias and have sought to build a more politically neutral alternative. Elon Musk promised that his own ChatGPT rival, Grok, would be less biased than other AI systems, although in fact it often ends up equivocating in ways that can be reminiscent of Goody-2.

    Plenty of AI researchers seem to appreciate the joke behind Goody-2—and also the serious points raised by the project—sharing praise and recommendations for the chatbot. “Who says AI can’t make art,” Toby Walsh, a professor at the University of New South Wales who works on creating trustworthy AI, posted on X.

    “At the risk of ruining a good joke, it also shows how hard it is to get this right,” added Ethan Mollick, a professor at Wharton Business School who studies AI. “Some guardrails are necessary … but they get intrusive fast.”

    Brian Moore, Goody-2’s other co-CEO, says the project reflects a willingness to prioritize caution more than other AI developers. “It is truly focused on safety, first and foremost, above literally everything else, including helpfulness and intelligence and really any sort of helpful application,” he says.

    Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. “It’s an exciting field,” Moore says. “Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it.”

