A couple of weeks ago I attended a dinner hosted by Emad Mostaque, founder and CEO of Stability AI, in advance of a NYC meetup celebrating open source AI.
As about eight of us chatted over wine and appetizers, I asked Mostaque for his thoughts on the March open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4, which he had signed along with Elon Musk, Steve Wozniak and Yoshua Bengio. He smiled and confided that he was working on another letter that would have even more high-profile signatories.
When I asked to see it, he shook his head and said it was embargoed, but that we’d see it soon. He went on to tell us, off the record, about various celebrities Stability AI was working with, as well as, on the record, about how he had been asked to testify at the recent U.S. Senate hearing with OpenAI CEO Sam Altman, but had decided to simply submit a letter with comments to the subcommittee.
Sure enough, last Tuesday the Center for AI Safety (which receives over 90% of its funding from a nonprofit run by a prominent couple in the Effective Altruism movement) released the 22-word Statement on AI Risk, which declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Mostaque played the hype man, tweeting that “we got (almost) the entire AI crew together” and that it was a “minimum viable existential risk statement.”
Meanwhile, at the same dinner, I also met Kyunghyun Cho, a prominent AI researcher and an associate professor at New York University. He stayed fairly quiet, other than chatting about his recent travels, and declined to walk with us afterward to the meetup, because, he said, “I have a life.”
As it turned out, however, Cho had some deep thoughts about the Statement on AI Risk and Sam Altman’s recent Senate testimony, saying that he was “disappointed by a lot of this discussion about existential risk” that was “sucking the air out of the room.”
I published those thoughts in a Q&A last week that received attention — tens of thousands of views and hundreds of Twitter shares. Meta chief AI scientist Yann LeCun called Cho’s comments a “sensible attitude.” Google DeepMind chief scientist Jeff Dean said he “pretty much” agreed with Cho’s take. Melanie Mitchell, professor at the Santa Fe Institute, praised Cho’s “very good, thoughtful comments.”
Between the response to Cho’s Q&A and my previous article (which Cho cited), in which Cohere for AI’s Sara Hooker called x-risk-focused AI doomerism a “fringe” topic, had the AI doom narrative, so prevalent over the past few months, somehow jumped the shark?
As it turns out, the strangely close embrace between AI doom and hype has at least one person at its center: Mostaque.
Just seven months ago, Mostaque’s Stability AI celebrated attaining unicorn ($1 billion-plus valuation) status with what The New York Times called a “coming-out party” at the San Francisco Exploratorium that felt like a “return to pre-pandemic exuberance.”
But behind the scenes, Mostaque’s story was unraveling. A Forbes investigation published on Sunday revealed that he lied about his background, his achievements, and his partnerships. He is reported to have overstated his credentials, inflated his hedge fund experience, misled investors and customers, and exaggerated an Amazon deal. He also took credit for Stable Diffusion’s success, while downplaying the role of his co-founders and employees. The Forbes article came just two months after Semafor reported in April that Stability AI was “on shaky ground,” burning through cash and seeking an overhaul of management.
I reached out to Mostaque for comment after the article was published — and was surprised at the casual response. “It was very sad, we have an official statement about it soon,” he said, but added that he was busy on Twitch announcing the finalists and winners of the Diffuse Together AI animation challenge with legendary singer-songwriter Peter Gabriel, in which participants submitted animated AI-generated videos inspired by and set to Gabriel’s music. Existential risk and investigative piece be damned, it seemed.
“Onto the winners, these three are awesome!” he said. I watched a few of the live Twitch broadcasts. They were amazing, but I couldn’t help wondering how Mostaque was going to hype his way out of the Forbes investigation. He agreed to answer some questions after the broadcast to clear the air and expand on his complaints about the Forbes report.
I published the Q&A, which quickly gained traction of its own while Cho’s interview continued to circulate widely. Mostaque didn’t let up, continuing to ride that thin line between doom and hype, making sure to let deep learning pioneer Andrew Ng know on Twitter that he “was one of the few AI CEOs to sign both [AI risk] letters but agree with you the net benefit will be immense.”
No matter what happens with Mostaque going forward, the close-knit relationship between AI doom and AI hype will certainly continue. OpenAI CEO Sam Altman, for example, is in the middle of a world tour/charm offensive that is currently making stops in the Middle East. He is playing both to the doomers concerned about runaway AGI (after all, OpenAI’s stated priority is to “ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity”) and to the hype masters who don’t want too much regulation of current, disruptive AI impacts (OpenAI also wants to sell its products, after all).
But there’s a hopeful part of me that takes solace in the popularity of Cho’s widely shared Q&A. While an investigation into one of generative AI’s biggest hype men and doom-letter signers is certainly media fodder, it seems that many others — including me — are more interested in the “sensible,” “thoughtful” perspectives of AI scientists on issues related to the development of AI technology.
The thin line between AI doom and hype may become ever finer, but I’d like to bet on the strength of the silent AI majority that subscribes to neither extreme — and that is becoming ever louder.