    AI pioneers Hinton, Ng, LeCun, Bengio amp up x-risk debate

31 October 2023



    In a series of online articles, blog posts and posts on X/LinkedIn over the past few days, AI pioneers (sometimes called “godfathers” of AI) Geoffrey Hinton, Andrew Ng, Yann LeCun and Yoshua Bengio have amped up their debate over existential risks of AI by commenting publicly on each other’s posts. The debate clearly places Hinton and Bengio on the side that is highly concerned about AI’s existential risks, or x-risks, while Ng and LeCun believe the concerns are overblown, or even a conspiracy theory Big Tech firms are using to consolidate power.

    It’s a far cry from the united front of AI positivity they have shown over the years since leading the way on the deep learning ‘revolution’ that began in 2012. Even a year ago, LeCun and Hinton pushed back in interviews with VentureBeat against Gary Marcus and other critics who said deep learning had “hit a wall.” 

Hinton responds to claims that x-risk is a Big Tech conspiracy

    But today, Hinton, who quit his role at Google in May to speak out freely about the risks of AI, posted on X about recent comments from computer scientist Andrew Ng, who did pioneering work in image recognition after co-founding Google Brain in 2011.

    Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.

    — Geoffrey Hinton (@geoffreyhinton) October 31, 2023

    Hinton was responding to Ng’s comments in a recent interview with the Australian Financial Review that Big Tech is “lying” about some AI risks to shut down competition and trigger strict regulation.


    And today, in an issue of his newsletter The Batch, Ng wrote that “My greatest fear for the future of AI is if overhyped risks (such as human extinction) lets tech lobbyists get enacted stifling regulations that suppress open-source and crush innovation.”

    LeCun and Ng say tech leaders are exaggerating existential risks

    LeCun, who is chief AI scientist at Meta, responded to Ng’s comments with a recent post saying “Well, at least *one* Big Tech company is open sourcing AI models and not lying about AI existential risk.” He added in a response that “Lying is a big word that I haven’t used. I think some of these tech leaders are genuinely worried about existential risk. I think they are wrong. I think they exaggerate it. I think they have an unwarranted superiority complex that leads them to believe that 1. It’s okay if *they* do it, but not okay if the populace does it. 2. Superhuman AI is just around the corner and will have all the characteristics of current LLMs.”

    LeCun also responded to Hinton’s post:

    You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models.

    This will inevitably lead to bad outcomes in the medium term.

    — Yann LeCun (@ylecun) October 31, 2023

Hinton, in turn, responded to one of LeCun's posts:

    Let’s open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK.

    — Geoffrey Hinton (@geoffreyhinton) October 31, 2023

    Bengio says AI risks are ‘keeping me up at night’

Meanwhile, just last week Hinton and Bengio — who, together with LeCun, received the 2018 ACM A.M. Turing Award (often referred to as the “Nobel Prize of Computing”) for their work on deep learning — joined with 22 other leading AI academics and experts to propose a framework for policy and governance that aims to address the growing risks associated with artificial intelligence.

    The paper said companies and governments should devote a third of their AI research and development budgets to AI safety, and also stressed urgency in pursuing specific research breakthroughs to bolster AI safety efforts.

Just a few days ago, Bengio wrote an opinion piece for Canada’s Globe and Mail, in which he said that as ChatGPT and similar LLMs continued to make giant leaps over the past year, his “apprehension steadily grew.” He said that major AI risks “are a grave source of concern for me, keeping me up at night, especially when I think about my grandson and the legacy we will leave to his generation.”

    X-risk debate does not diminish friendship, say ‘godfathers’ of AI

    The debate does not diminish the long friendship between the quartet. Andrew Ng posted a photo of himself at a recent party celebrating Hinton’s retirement from Google, while LeCun did the same — posting a photo of himself with Hinton and Bengio with a caption saying: “A reminder that people can disagree about important things but still be good friends.”
