Yoshua Bengio and Geoffrey Hinton, two of the so-called godfathers of AI, have joined 22 other leading AI academics and experts to propose a policy and governance framework that aims to address the growing risks associated with artificial intelligence.
The paper said companies and governments should devote a third of their AI research and development budgets to AI safety, and stressed the urgency of pursuing specific research breakthroughs to bolster AI safety efforts.
The proposals are significant because they come in the run-up to next week’s AI safety summit meeting at Bletchley Park in the UK, where international politicians, tech leaders, academics and others will gather to discuss how to regulate AI amid growing concerns around its power and risks.
The paper calls for special action from the large private companies developing AI and from government policymakers and regulators.
Here are some of the proposals:
- Companies and governments should allocate at least one-third of their AI R&D budget to ensuring safety and ethical use, comparable to their funding for AI capabilities.
- Governments urgently need comprehensive insight into AI development. Regulators should require model registration, whistleblower protections, incident reporting, and monitoring of model development and supercomputer usage.
- Regulators should be given access to advanced AI systems before deployment to evaluate them for dangerous capabilities such as autonomous self-replication, breaking into computer systems, or making pandemic pathogens widely accessible.
- Governments should also hold developers and owners of “frontier AI” – the term given to the most advanced AI systems – legally accountable for harms from their models that can be reasonably foreseen and prevented.
- Governments must be prepared to license certain AI development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready.
Both Bengio and Hinton are renowned experts in the field of AI, and have recently stepped up their calls for AI safety amid mounting risk. Those calls have faced pushback from another prominent AI leader, Yann LeCun, who argues that current AI risks do not warrant such urgent measures. While safety-first voices were largely drowned out over the past couple of years as companies focused on building out AI technology, the balance appears to be shifting toward caution as powerful new capabilities emerge.
Other co-authors of the paper include academic and bestselling author Yuval Noah Harari, Nobel laureate in economics Daniel Kahneman, and prominent AI researcher Jeff Clune. Last week, another AI leader, Mustafa Suleyman, joined others in proposing an AI equivalent to the Intergovernmental Panel on Climate Change (IPCC) to help shape protocols and norms.
The paper devotes a lot of its attention to the risks posed by companies that are developing autonomous AI, or systems that “can plan, act in the world, and pursue goals. While current AI systems have limited autonomy, work is underway to change this,” the paper said.
For example, the paper noted, the cutting-edge GPT-4 model offered by OpenAI was quickly adapted to browse the web, design and execute chemistry experiments, and use software tools, including other AI models. Software programs like AutoGPT have been created to automate such processes, allowing AI systems to keep operating without human intervention.
The paper said there is a significant risk that these autonomous systems could go rogue, and that there is currently no way to keep them in check.
“If we build highly advanced autonomous AI, we risk creating systems that pursue undesirable goals. Malicious actors could deliberately embed harmful objectives. Moreover, no one currently knows how to reliably align AI behavior with complex values. Even well-meaning developers may inadvertently build AI systems that pursue unintended goals—especially if, in a bid to win the AI race, they neglect expensive safety testing and human oversight,” the paper said.
The paper also called for research breakthroughs to address some key technical challenges in creating safe and ethical AI:
- Oversight and honesty: More capable AI systems are better able to exploit weaknesses in oversight and testing – for example, by producing false but compelling output;
- Robustness: AI systems behave unpredictably in new situations (under distribution shift or adversarial inputs);
- Interpretability: AI decision-making is opaque. So far, we can only test large models via trial and error. We need to learn to understand their inner workings;
- Risk evaluations: Frontier AI systems develop unforeseen capabilities that are only discovered during training or even well after deployment. Better evaluation is needed to detect hazardous capabilities earlier;
- Addressing emerging challenges: More capable future AI systems may exhibit failure modes that have so far been seen only in theoretical models. AI systems might, for example, learn to feign obedience or exploit weaknesses in safety objectives and shutdown mechanisms to advance a particular goal.