The National Institute of Standards and Technology (NIST) has released an urgent report to help defend against an escalating landscape of threats targeting artificial intelligence (AI) systems.
The report, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” arrives at a critical juncture when AI systems are both more powerful and more vulnerable than ever.
As the report explains, adversarial machine learning (ML) encompasses techniques attackers use to deceive AI systems through subtle manipulations that can have catastrophic effects.
The report goes on to provide a detailed and structured overview of how such attacks are orchestrated, categorizing them based on the attackers’ goals, capabilities and knowledge of the target AI system.
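To make “subtle manipulations” concrete, consider the evasion attack, a canonical example from the adversarial ML literature in which an attacker nudges an input just enough to flip a model’s prediction. The sketch below is a minimal illustration in the spirit of the fast gradient sign method (FGSM); the linear model, toy dataset, and perturbation size are arbitrary assumptions for demonstration, not code from the NIST report.

```python
# Minimal evasion-attack sketch in the spirit of FGSM (illustrative only;
# the model, toy dataset, and epsilon below are arbitrary assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# For logistic regression, the cross-entropy gradient w.r.t. the input is
# (p - y) * w, so the FGSM step reduces to +/- epsilon * sign(w).
x, true_label = X[:1], y[0]
step = np.sign(model.coef_[0]) * (1.0 if true_label == 0 else -1.0)
x_adv = x + 0.5 * step  # small per-feature nudge toward the wrong class

# Whether the prediction flips depends on this sample's decision margin.
print("clean:", model.predict(x)[0], "-> adversarial:", model.predict(x_adv)[0])
```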
“Attackers can deliberately confuse or even ‘poison’ artificial intelligence systems to make them malfunction,” the NIST report explains. These attacks exploit vulnerabilities in how AI systems are developed and deployed.
The report outlines attacks like “data poisoning,” where adversaries manipulate the data used to train AI models. “Recent work shows that poisoning could be orchestrated at scale so that an adversary with limited financial resources can control a fraction of public datasets used for model training,” the report says.
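As a rough illustration of the idea (not code from the report), the sketch below poisons a toy training set by flipping a fraction of its labels, a crude stand-in for an adversary who controls part of a public dataset. The dataset, model, and flip rates are arbitrary assumptions.

```python
# Label-flipping poisoning sketch (illustrative; assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def flip_labels(labels, rate, rng):
    """Flip a fraction of binary labels -- a crude stand-in for an
    adversary who controls part of a public training set."""
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(rate * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Accuracy degrades as the adversary's share of the data grows.
for rate in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000).fit(X_tr, flip_labels(y_tr, rate, rng))
    print(f"poison rate {rate:.0%}: test accuracy {model.score(X_te, y_te):.3f}")
```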
Another concern the NIST report outlines is “backdoor attacks,” where triggers are planted in training data to induce specific misclassifications later on. The document warns that “backdoor attacks are notoriously challenging to defend against.”
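A toy sketch of the mechanics, again illustrative rather than taken from the report: the attacker stamps a small trigger patch onto a fraction of training images and relabels them with a chosen target class, so that at inference time any input carrying the patch is steered toward that class. The array shapes, patch, and poison rate here are arbitrary assumptions.

```python
# Toy backdoor-trigger sketch (illustrative; not code from the NIST report).
import numpy as np

def stamp_trigger(images, patch_value=1.0, size=3):
    """Overwrite a small corner patch -- the 'trigger' -- on each image."""
    triggered = images.copy()
    triggered[:, :size, :size] = patch_value
    return triggered

def poison_for_backdoor(images, labels, target_class, rate, rng):
    """Stamp the trigger on a random fraction of images and point their
    labels at the attacker's chosen target class."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx] = stamp_trigger(images[idx])
    labels[idx] = target_class
    return images, labels

rng = np.random.default_rng(0)
X = rng.random((100, 28, 28))       # placeholder "images"
y = rng.integers(0, 10, size=100)   # placeholder labels
X_poisoned, y_poisoned = poison_for_backdoor(X, y, target_class=7, rate=0.05, rng=rng)
```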
The NIST report also highlights privacy risks in AI systems. Techniques like “membership inference attacks” can determine whether a particular data sample was used to train a model. More broadly, NIST cautions, “No foolproof way exists as yet for protecting AI from misdirection.”
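One simple baseline from the membership-inference literature is a loss-threshold attack: because overfit models assign lower loss to their training (“member”) samples, an attacker can guess membership by thresholding per-sample loss. The sketch below is a minimal illustration under that assumption, not a method prescribed by the NIST report; the model, dataset, and threshold rule are all arbitrary choices.

```python
# Loss-threshold membership-inference sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An intentionally overfit model: members get near-zero loss.
model = DecisionTreeClassifier(random_state=0).fit(X_in, y_in)

def per_sample_loss(model, X, y, eps=1e-12):
    """Cross-entropy loss of each sample under the model's predictions."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(probs + eps)

# Attacker guesses "member" whenever the loss falls below a threshold.
threshold = per_sample_loss(model, X_in, y_in).mean()
guess_in = per_sample_loss(model, X_in, y_in) <= threshold
guess_out = per_sample_loss(model, X_out, y_out) <= threshold

# The gap between the two rates reflects membership leakage from overfitting.
print(f"true positive rate: {guess_in.mean():.2f}, "
      f"false positive rate: {guess_out.mean():.2f}")
```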
While AI promises to transform industries, security experts emphasize the need for caution. “AI chatbots enabled by recent advances in deep learning have emerged as a powerful technology with great potential for numerous business applications,” the NIST report states. “However, this technology is still emerging and should only be deployed with abundance of caution.”
The goal of the NIST report is to establish a common language and understanding of AI security issues. The document will likely serve as an important reference for the AI security community as it works to address emerging threats.
Joseph Thacker, principal AI engineer and security researcher at AppOmni, told VentureBeat, “This is the best AI security publication I’ve seen. What’s most noteworthy are the depth and coverage. It’s the most in-depth content about adversarial attacks on AI systems that I’ve encountered.”
For now, attackers and defenders appear locked in a continuing game of cat and mouse. As experts grapple with emerging AI security threats, one thing is clear: we have entered a new era in which AI systems will need much more robust protection before they can be safely deployed across industries. The risks are simply too great to ignore.