A group of the world’s leading artificial intelligence (AI) experts — including many pioneering researchers who have sounded alarms in recent months about the existential threats posed by their own work — released a sharply worded statement on Tuesday warning of a “risk of extinction” from advanced AI if its development is not properly managed.
The joint statement, signed by hundreds of experts including the CEOs of OpenAI, DeepMind and Anthropic, aims to overcome obstacles to openly discussing catastrophic risks from AI, according to its authors. It comes during a period of intensifying concern about the societal impacts of AI, even as companies and governments push to achieve transformative leaps in its capabilities.
“AI experts, journalists, policymakers and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” reads the statement published by the Center for AI Safety. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion.”
Luminary leaders recognize concerns
The signatories include some of the most influential figures in the AI industry, such as Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. These companies are widely considered to be at the forefront of AI research and development, making their executives’ acknowledgment of the potential risks particularly noteworthy.
Notable researchers who have also signed the statement include Yoshua Bengio, a pioneer in deep learning; Ya-Qin Zhang, a distinguished scientist and corporate vice president at Microsoft; and Geoffrey Hinton, known as the “godfather of deep learning,” who recently left his position at Google to “speak more freely” about the existential threat posed by AI.
Hinton’s departure from Google last month has drawn attention to his evolving views on the capabilities of the computer systems he has spent his career researching. At 75, the renowned professor has expressed a desire to engage in candid discussions about the potential dangers of AI without the constraints of corporate affiliation.
Call to action
The joint statement follows a similar initiative in March, when dozens of researchers signed an open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4. Signatories of the “pause” letter included tech luminaries Elon Musk, Steve Wozniak, Bengio and Gary Marcus.
Despite these calls for caution, there remains little consensus among industry leaders and policymakers on the best approach to regulate and develop AI responsibly. Earlier this month, tech leaders including Altman, Amodei and Hassabis met with President Biden and Vice President Harris to discuss potential regulation. In a subsequent Senate testimony, Altman advocated for government intervention, emphasizing the seriousness of the risks posed by advanced AI systems and the need for regulation to address potential harms.
In a recent blog post, OpenAI executives outlined several proposals for responsibly managing AI systems. Among their recommendations were increased collaboration among leading AI researchers, more in-depth technical research into large language models (LLMs), and the establishment of an international AI safety organization. This statement serves as a further call to action, urging the broader community to engage in a meaningful conversation about the future of AI and its potential impact on society.