The AI Book
Daily AI News

    Meta and Google news adds fuel to the open-source AI fire

22 May 2023



    The open-source AI debate is getting even hotter in Big Tech, thanks to recent headlines from Google and Meta.

On Tuesday evening, CNBC reported that Google’s newest large language model (LLM), PaLM 2, “uses nearly five times more text data for training than its predecessor,” even though when it announced the model last week, Google said it was smaller than the earlier PaLM but used a more efficient “technique.” The article emphasized that “the company has been unwilling to publish the size or other details of its training data.”

    While a Google spokesperson declined to comment on the CNBC reporting, Google engineers were, to put it mildly, pissed off by the leak and eager to share their thoughts. In a now-removed tweet, Dmitry (Dima) Lepikhin, a senior staff software engineer at Google DeepMind, tweeted: “whoever leaked PaLM2 details to cnbc, sincerely fuck you!”

    And Alex Polozov, a senior staff research scientist at Google, also weighed in with what he called a “rant,” pointing out that the leak sets a precedent for increased siloing of research.


    I’m so pissed I’m going to take this rant public.
    WTF are you trying to accomplish with leaks? Is it just the ego thrill of importance?
    100s of Googlers work *hard* to keep publishing & scientific collabs alive. And you just make precedent to silo it all.https://t.co/0o6iDj4PsJ

— Alex Polozov (@Skiminok) May 17, 2023

    Lucas Beyer, a Google AI researcher in Zurich, agreed, tweeting: “It’s not the token count (which I don’t even know if it’s correct) that upsets me, it’s the complete erosion of trust and respect. Leaks like this lead to corpspeak and less openness over time, and an overall worse work/research environment. And for what? FFS.”

In coincidental timing unrelated to the Google leak, Meta chief AI scientist Yann LeCun gave an interview to the New York Times focusing on Meta’s open-source AI efforts, which was published this morning.

    The piece describes Meta’s release of its LLaMA large language model in February as “giving away its AI crown jewels” — since it released the model’s source code to “academics, government researchers and others who gave their email address to Meta [and could then] download the code once the company had vetted the individual.”

    “The platform that will win will be the open one,” LeCun said in the interview, later adding that the growing secrecy at Google and OpenAI is a “huge mistake” and a “really bad take on what is happening.”

In a Twitter thread, VentureBeat journalist Sean Michael Kerner pointed out that Meta “actually already gave away one of the most critical AI/ML tools ever created — PyTorch. The foundational stuff needs to be open/and it is. After all, where would OpenAI be without PyTorch?”

But even Meta and LeCun will only go so far in terms of openness. For example, Meta had made LLaMA’s model weights available to academics and researchers on a case-by-case basis (including Stanford for its Alpaca project), but those weights were subsequently leaked on 4chan. It was that leak, not Meta’s gated release, that first gave developers around the world full access to a GPT-level LLM; Meta’s release did not permit commercial use of the LLaMA model.

VentureBeat spoke to Meta last month about the nuances of its take on the open- vs. closed-source debate. Joelle Pineau, VP of AI research at Meta, said in our interview that accountability and transparency in AI models are essential.

    “More than ever, we need to invite people to see the technology more transparently and lean into transparency,” she said, explaining that the key is to balance the level of access, which can vary depending on the potential harm of the model.

    “My hope, and it’s reflected in our strategy for data access, is to figure out how to allow transparency for verifiability audits of these models,” she said.

    On the other hand, she said that some levels of openness go too far. “That’s why the LLaMA model had a gated release,” she explained. “Many people would have been very happy to go totally open. I don’t think that’s the responsible thing to do today.”

    LeCun remains outspoken on AI risks being overblown

    Still, LeCun remains outspoken in favor of open-source AI, and in the New York Times interview argued that the dissemination of misinformation on social media is more dangerous than the latest LLM technology.

    “You can’t prevent people from creating nonsense or dangerous information or whatever,” he said. “But you can stop it from being disseminated.”

    And while Google and OpenAI may become more closed with their AI research, LeCun insisted he — and Meta — remain committed to open source, saying “progress is faster when it is open.”



