The U.S. government made two big announcements this week to help drive the development of safe AI: the creation of the U.S. Artificial Intelligence Safety Institute, or AISI, on Wednesday, and the launch today of a supporting group called the Artificial Intelligence Safety Institute Consortium.
The new AI Safety Institute was established to help write the new AI rules and regulations that President Joe Biden ordered with his landmark executive order signed in late October. It will operate under the auspices of the National Institute of Standards and Technology (NIST) and will be led by Elizabeth Kelly, who was named AISI director yesterday by Laurie E. Locascio, the Under Secretary of Commerce for Standards and Technology and NIST director. Elham Tabassi will serve as chief technology officer.
“The Safety Institute’s ambitious mandate to develop guidelines, evaluate models, and pursue fundamental research will be vital to addressing the risks and seizing the opportunities of AI,” Kelly, a special assistant to the president for economic policy, stated in a press release. “I am thrilled to work with the talented NIST team and the broader AI community to advance our scientific understanding and foster AI safety. While our first priority will be executing the tasks assigned to NIST in President Biden’s executive order, I look forward to building the institute as a long-term asset for the country and the world.”
The NIST followed the creation of the AISI with today’s launch of the Artificial Intelligence Safety Institute Consortium, or AISIC. The new group is tasked with bringing together AI creators, users, academics, and government and industry researchers to “establish the foundations for a new measurement science in AI safety,” according to the NIST’s press release unveiling the AISIC.
The AISIC launched with 200 members, including many of the IT giants developing AI technology, like Anthropic, Cohere, Databricks, Google, Hugging Face, IBM, Meta, Microsoft, OpenAI, Nvidia, SAS, and Salesforce, among others. You can view the full list here.
The NIST lists several goals for the AISIC, including: creating a “sharing space” for AI stakeholders; engaging in “collaborative and interdisciplinary research and development”; developing evaluation requirements to understand “AI’s impacts on society and the US economy”; recommending approaches to facilitate “the cooperative development and transfer of technology and data”; helping federal agencies communicate better; and creating tests for AI measurements.
“NIST has been bringing together diverse teams like this for a long time. We have learned how to ensure that all voices are heard and that we can leverage our dedicated teams of experts,” Locascio said at a press briefing today. “AI is moving the world into very new territory. And like every new technology, or every new application of technology, we need to know how to measure its capabilities, its limitations, its impacts. That is why NIST brings together these incredible collaborations of representatives from industry, academia, civil society and the government, all coming together to tackle challenges that are of national importance.”
One of the AISIC members, BABL AI, applauded the creation of the group. “As an organization that audits AI and algorithmic systems for bias, safety, ethical risk, and effective governance, we believe that the Institute’s task of developing a measurement science for evaluating these systems aligns with our mission to promote human flourishing in the age of AI,” BABL AI CEO Shea Brown stated in a press release.
Lena Smart, the CISO for MongoDB, another AISIC member, also supports the initiative. “New technology like generative AI can have an immense benefit to society, but we must ensure AI systems are built and deployed using standards that help ensure they operate safely and without harm across populations,” Smart said in a press release. “By supporting the USAISIC as a founding member, MongoDB’s goal is to use scientific rigor, our industry expertise, and a human-centered approach to guide organizations on safely testing and deploying trustworthy AI systems without stifling innovation.”
AI security, privacy, and ethical concerns were simmering on the back burner until November 2022, when OpenAI unveiled ChatGPT to the world. Since then, the field of AI has exploded, and its potential harms have become the subject of intense debate, with some prominent voices declaring AI a threat to the future of humanity.
Governments have responded by accelerating plans to regulate AI. European lawmakers in December approved the AI Act, which is on pace to become law next year. In the United States, President Joe Biden’s late-October executive order set in motion the creation of new rules and regulations that US companies must follow for AI technology.