Head over to our on-demand library to view sessions from VB Transform 2023. Register Here
With generative AI tools like ChatGPT proliferating across enterprises, CISOs have to strike a very difficult balance: performance gains versus unknown risks. Generative AI is delivering greater precision to cybersecurity but also being weaponized into new attack tools, such as FraudGPT, that advertise their ease of use for the next generation of attackers.
Resolving the performance-versus-risk question is proving to be a growth catalyst for cybersecurity spending. The market value of generative AI-based cybersecurity platforms, systems and solutions is expected to rise from $1.6 billion in 2022 to $11.2 billion in 2032. Canalys expects generative AI to support over 70% of businesses’ cybersecurity operations within five years.
Weaponized AI strikes at the core of identity security
Generative AI attack strategies focus on getting control of identities first. According to Gartner, human error in managing access privileges and identities caused 75% of security failures, up from 50% two years ago. Forcing those human errors with generative AI is one of attackers’ goals.
VentureBeat interviewed Michael Sentonas, president of CrowdStrike, to gain insights into how the cybersecurity leader is helping its customers take on the challenges of new, more lethal attacks that defy existing detection and response technologies.
Sentonas said that “the hacking [demo] session that [we] did at RSA [2023] was to show some of the challenges with identity and the complexity. The reason why we connected the endpoint with identity and the data that the user is accessing is because it’s a critical problem. And if you can solve that, you can solve a big part of the cyber problem that an organization has.”
Cybersecurity leaders are up for the challenge
Leading cybersecurity vendors are rising to the challenge, fast-tracking generative AI apps through DevOps to beta and doubling down on the many models they have in development.
During Palo Alto Networks‘ most recent earnings call, chairman and CEO Nikesh Arora emphasized the intensity the company is putting into generative AI, saying, “And we’re doubling down, we’re quadrupling down to make sure that precision AI is deployed across every product of Palo Alto. And we open up the floodgates of collecting good data with our customers for them to give them better security because we think that is the way we’re going to solve this problem to get real-time security.”
Toward resilience against AI-based threats
For CISOs and their teams to win the war against AI attacks and threats, generative AI-based apps, tools and platforms must become part of their arsenals. Attackers are out-innovating the most adaptive enterprises, sharpening their tradecraft to penetrate the weakest attack vectors. What’s needed is greater cyber-resilience and self-healing endpoints.
Absolute Software’s 2023 Resilience Index tracks with what VentureBeat has learned about how challenging it is to excel at comply-to-connect, a trend the Index also identified. Balancing security and cyber-resilience is the goal, and the Index provides a useful roadmap for how organizations can pursue it. Cyber-resilience, like zero trust, is an ongoing framework that adapts to an organization’s changing needs.
Every CEO and CISO VentureBeat interviewed at RSAC 2023 said employee- and company-owned endpoint devices are the fastest-moving, hardest-to-protect threat surfaces. With the rising risk of generative AI-based attacks, resilient, self-healing endpoints that can regenerate operating systems and configurations are the future of endpoint security.
Five ways CISOs and their teams can prepare
Central to being prepared for generative AI-based attacks is to create muscle memory of every breach or intrusion attempt at scale, using AI, generative AI and machine learning (ML) algorithms that learn from every intrusion attempt. Here are the five ways CISOs and their teams are preparing for generative AI-based attacks:
Securing generative AI and ChatGPT sessions in the browser
Despite the security risk of confidential data being leaked into LLMs, organizations are intrigued by the productivity gains generative AI and ChatGPT promise. VentureBeat’s interviews with CISOs, starting at RSA and continuing this month, reveal that these professionals are split on how to define AI governance. Any workable solution must secure access at the browser, app and API levels.
Several startups and larger cybersecurity vendors are working on solutions in this area. Nightfall AI’s recent announcement of an innovative security protocol is noteworthy: the company says its customizable data rules and remediation insights help users self-correct, and the platform gives CISOs the visibility and control to use AI while keeping data secure.
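Securing generative AI sessions at the browser or API-gateway level typically starts with scanning prompts for sensitive data before they reach an LLM. The sketch below is a minimal illustration of that idea; the patterns and labels are hypothetical assumptions, not Nightfall’s actual rules or any vendor’s implementation.

```python
import re

# Hypothetical patterns; a real DLP policy would cover many more data types.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive tokens from a prompt before it leaves the gateway.

    Returns the redacted prompt and the list of rule labels that fired,
    which a security team could surface to the user as remediation insight.
    """
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings
```

Returning the fired rule labels, not just the cleaned text, is what lets users self-correct rather than silently losing content.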
Always scanning for new attack vectors and types of compromise
SOC teams are seeing more sophisticated social engineering, phishing, malware and business email compromise (BEC) attacks that they attribute to generative AI. While attacks on LLMs and generative AI apps are nascent today, CISOs are already doubling down on zero trust to reduce these risks.
That includes continuously monitoring and analyzing generative AI traffic patterns to detect anomalies that could indicate emerging attacks, and regularly testing and red-teaming generative AI systems in development to uncover potential vulnerabilities. While zero trust can’t eliminate all risks, it can help make organizations more resilient against generative AI threats.
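Continuous monitoring of generative AI traffic patterns can begin with something as simple as a statistical baseline per endpoint. A hedged sketch follows; the metric (prompts per minute) and the three-sigma threshold are illustrative assumptions, and production systems would use far richer detectors.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag a traffic measurement (e.g., prompts/minute to an LLM endpoint)
    that deviates more than `threshold` standard deviations from recent history."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # flat baseline: any change is notable
    return abs(current - mu) / sigma > threshold
```

A SOC team would feed such flags into triage rather than block on them directly, since a spike may be a legitimate usage surge.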
Finding and closing gaps and errors in microsegmentation
Thanks to startups’ ingenuity, generative AI is already improving microsegmentation, a cornerstone of zero trust. Nearly every microsegmentation provider is fast-tracking DevOps efforts.
Leading vendors with deep AI and ML expertise include Akamai, Airgap Networks, AlgoSec, Cisco, ColorTokens, Elisity, Fortinet, Illumio, Microsoft Azure, Onclave Networks, Palo Alto Networks, VMware, Zero Networks and Zscaler.
One of the most innovative startups in microsegmentation is Airgap Networks, named one of the 20 best zero-trust startups of 2023. Airgap’s approach to agentless microsegmentation reduces the attack surface of every network endpoint, and it is possible to segment every endpoint across an enterprise while integrating the solution into an existing network with no device changes, downtime or hardware upgrades.
Airgap Networks also introduced its Zero Trust Firewall (ZTFW) with ThreatGPT, which uses graph databases and GPT-3 models to help SecOps teams gain new threat insights. The GPT-3 models analyze natural language queries and identify security threats, while graph databases provide contextual intelligence on endpoint traffic relationships.
“With highly accurate asset discovery, agentless microsegmentation and secure access, Airgap offers a wealth of intelligence to combat evolving threats,” Ritesh Agrawal, CEO of Airgap, told VentureBeat. “What customers need now is an easy way to harness that power without any programming. And that’s the beauty of ThreatGPT — the sheer data-mining intelligence of AI coupled with an easy, natural language interface. It’s a game-changer for security teams.”
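At its core, microsegmentation enforces a default-deny policy on traffic between workload segments. The sketch below illustrates that principle only; the segment names and allow-list are hypothetical, and this is not a representation of Airgap’s or any vendor’s implementation.

```python
# Hypothetical segment policy: only explicitly allowed (source, destination)
# pairs may communicate; everything else is denied by default.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

def flow_allowed(src_segment: str, dst_segment: str) -> bool:
    """Default-deny check in the spirit of zero-trust microsegmentation."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS
```

Note that the web tier cannot reach the database directly; a compromised front end gains no lateral path, which is the blast-radius reduction microsegmentation is after.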
Guarding against generative AI-based supply chain attacks
Security is often tested right before deployment, at the end of the software development lifecycle (SDLC). In an era of emerging generative AI threats, security must be pervasive throughout the SDLC, with continuous testing and verification. API security must also be a priority, and API testing and security monitoring should be automated in all DevOps pipelines.
While not foolproof against new generative AI threats, these practices significantly raise the barrier and enable quick threat detection. Integrating security across the SDLC and improving API defenses will help enterprises thwart AI-powered threats.
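One way to automate API security monitoring in a DevOps pipeline is a gate that probes each endpoint without credentials and fails the build if any respond successfully. This sketch assumes the conventional status-code semantics (401/403 meaning authentication is enforced); the endpoint names are hypothetical.

```python
def endpoint_enforces_auth(status_code: int) -> bool:
    """Given the HTTP status of an unauthenticated probe, decide whether
    the endpoint enforces authentication (401 Unauthorized / 403 Forbidden)."""
    return status_code in (401, 403)

def gate_pipeline(probe_results: dict[str, int]) -> list[str]:
    """Return the endpoints that failed the auth check.

    A CI step would fail the build if this list is non-empty.
    """
    return [url for url, status in probe_results.items()
            if not endpoint_enforces_auth(status)]
```

Running such a gate on every pipeline execution, rather than only before release, is what moves API security testing left in the SDLC.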
Taking a zero-trust approach to every generative AI app, platform, tool and endpoint
A zero-trust approach to every interaction with generative AI tools, apps and platforms, and the endpoints they rely on, is a must-have in any CISO’s playbook. Continuous monitoring and dynamic access controls must be in place to provide the granular visibility needed to enforce least-privilege access and always-on verification of users, devices, and the data they’re using, both at rest and in transit.
CISOs are most worried that generative AI will bring new attack vectors they’re unprepared to protect against. For enterprises building large language models (LLMs), protecting against query attacks, prompt injection, model manipulation and data poisoning is a high priority.
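A first-pass input screen for prompt injection can be sketched as a deny-list heuristic. The markers below are illustrative assumptions; real defenses layer input filtering with output checks and model-side controls, since simple string matching is easy to evade.

```python
# Hypothetical deny-list markers; real attackers paraphrase, so this is
# only a cheap first filter, not a complete defense.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard your system prompt",
)

def looks_like_prompt_injection(user_input: str) -> bool:
    """Cheap first-pass screen for prompt-injection attempts against an LLM app."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

Flagged inputs would typically be logged and routed to stricter handling (e.g., a sandboxed model call) rather than silently dropped.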
Preparing for generative AI attacks with zero trust
CISOs, CIOs and their teams face a challenging question today: do generative AI tools like ChatGPT get free rein in their organizations to deliver greater productivity, or are they reined in and controlled, and if so, by how much? Samsung’s failure to protect intellectual property is still fresh in the minds of many board members, VentureBeat has learned through conversations with CISOs who regularly brief their boards.
One thing everyone agrees on, from the board level to SOC teams, is that generative AI-based attacks are increasing. Yet no board wants to jump into capital expense budgeting, especially given inflation and rising interest rates. The answer many are arriving at is accelerating zero-trust initiatives. While an effective zero-trust framework won’t stop generative AI attacks completely, it can help reduce their blast radius and establish a first line of defense for identities and privileged access credentials.