When Aparna Pappu, vice president and general manager of Google Workspace, spoke at Google I/O on May 10, she laid out a vision for artificial intelligence that helps users wade through their inbox. Pappu showed how generative AI can whisper summaries of long email threads in your ear, pull in relevant data from local files as you salsa together through unread messages, and dip you low to the ground as it suggests insertable text. Welcome to the inbox of the future.
While the specifics of how it’ll arrive remain unclear, generative AI is poised to fundamentally alter how people communicate over email. Machine learning, the broader branch of AI that generative tools belong to, already performs a kind of safety dance long after you’ve logged off. “Machine learning has been a critical part of what we’ve used to secure Gmail,” Pappu tells WIRED.
A few errant clicks on a suspicious email can wreak havoc on your security, so how does machine learning help deflect phishing attacks? Neil Kumaran, a product lead at Google who focuses on security, explains that machine learning can look at the phrasing of incoming emails and compare it to past attacks. It can also flag unusual message patterns and sniff out any weirdness emanating from the metadata.
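Google doesn’t publish the internals of its filters, but the metadata side of that idea is simple enough to sketch. The toy Python below applies two classic header checks: a Reply-To domain that doesn’t match the From domain, and a display name invoking a brand the sending domain can’t back up. The brand list, checks, and sample message are all invented for illustration; a production filter learns thousands of signals like these rather than hard-coding two.

```python
# Illustrative header heuristics only: the checks, brand list, and
# sample message are invented; real filters learn such signals at scale.
from email import message_from_string
from email.utils import parseaddr

def header_red_flags(raw_message: str) -> list[str]:
    msg = message_from_string(raw_message)
    flags = []

    # Where replies actually go vs. who the mail claims to be from.
    display_name, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.split("@")[-1].lower()
    if reply_to and reply_to.split("@")[-1].lower() != from_domain:
        flags.append("Reply-To domain differs from From domain")

    # Display-name spoofing: "PayPal Support <alerts@mailer.example>".
    for brand in ("paypal", "google", "microsoft"):  # toy brand list
        if brand in display_name.lower() and brand not in from_domain:
            flags.append(f"display name invokes {brand!r}, sender domain does not")

    return flags

raw = (
    "From: PayPal Support <alerts@mailer.example>\n"
    "Reply-To: refunds@attacker.example\n"
    "Subject: Action required\n\n"
    "Click here to verify your account."
)
print(header_red_flags(raw))  # both checks fire on this message
```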
Machine learning can do more than just flag dangerous messages as they pop up. Kumaran points out that it can also be used to track the people responsible for phishing attacks. He says, “At the time of account creation, we do evaluations. We try to figure out, ‘Does it look like this account is going to be used for malicious purposes?’” In the event of a successful phishing attack on your Google account, AI is involved in the recovery process as well. The company uses machine learning to help decide which login attempts are legit.
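Google doesn’t disclose how its models weigh login signals, but risk scoring is the standard shape of the problem: turn each attempt into features, then score it. The sketch below hand-weights a few invented features purely to show that shape; in a real system, the weights would be learned from past account-takeover data, not set by hand.

```python
# A toy login-risk score. The features and weights are assumptions made
# for illustration; production systems learn them from historical data.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool
    country_matches_history: bool
    failed_attempts_past_hour: int

def risk_score(attempt: LoginAttempt) -> float:
    """Return a 0-to-1 score; higher means more likely an attacker."""
    score = 0.0
    if not attempt.known_device:
        score += 0.4
    if not attempt.country_matches_history:
        score += 0.3
    score += min(attempt.failed_attempts_past_hour, 5) * 0.06
    return min(score, 1.0)

attempt = LoginAttempt(known_device=False, country_matches_history=False,
                       failed_attempts_past_hour=4)
print(f"risk: {risk_score(attempt):.2f}")  # high score: demand extra verification
```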
“How do we extrapolate intelligence from user reports to identify attacks that we may not know about, or at least start to model the impact on our users?” asks Kumaran. The answer from Google, like the answer to many questions in 2023, is more AI. This instance of AI is not a flirty chatbot teasing you with long exchanges late into the night; it’s a burly bouncer kicking out the rabble-rousers with its algorithmic arms crossed.
On the attack side, what’s instigating even more phishing attempts in your inbox? I’ll give you one guess. First letter “A,” last letter “I.” For years, security experts have warned about the potential for AI-generated phishing attacks to overwhelm your inbox. “It’s very, very hard to detect AI with the naked eye, either through the dialect or through the URL,” says Patrick Harr, CEO of SlashNext, a messaging security company. Just as people use AI-generated images and videos to create fairly convincing deepfakes, attackers can use AI-generated text to personalize phishing attempts in ways that are difficult for users to detect.
Multiple companies focused on email security are building models and applying machine-learning techniques to further protect your inbox. “We take the corpus of data that’s coming in and do what’s called supervised learning,” says Hatem Naguib, CEO of Barracuda Networks, an IT security firm. In supervised learning, humans label a portion of the email data: Which messages are likely safe? Which ones are suspicious? A model trained on those labels then extrapolates to the rest of the mail stream, flagging likely phishing attacks at scale.
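As a toy-sized illustration of that labeling-and-extrapolation loop, the sketch below trains a classifier on six hand-labeled messages and asks it about a seventh. The sample emails and the TF-IDF-plus-logistic-regression model are stand-ins chosen for brevity, not Barracuda’s actual pipeline.

```python
# Minimal supervised learning on labeled email text. The six training
# messages and the model choice are invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_messages = [
    ("Your invoice for March is attached, thanks!", "safe"),
    ("Lunch at noon? The usual place.", "safe"),
    ("Quarterly report draft, comments welcome.", "safe"),
    ("URGENT: verify your password now or lose access", "phish"),
    ("You won a free cooler, claim your prize here", "phish"),
    ("Wire transfer needed today, CEO request, keep confidential", "phish"),
]
texts, labels = zip(*labeled_messages)

# TF-IDF turns each message into weighted word counts; logistic
# regression learns which words separate the two labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Please verify your account password immediately"]))
# -> ['phish'] on this toy training set
```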
It’s a valuable aspect of phishing detection, but attackers remain on the prowl for ways to circumvent protections. A persistent scam about a made-up Yeti Cooler giveaway evaded filters last year with an unexpected kind of HTML anchoring.
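The giveaway campaign’s exact markup hasn’t been detailed publicly, but one long-standing anchor trick is displaying a trustworthy-looking URL as a link’s visible text while the href points somewhere else entirely. The checker below flags that mismatch; the example link is fabricated, not the actual scam’s HTML.

```python
# Flags anchors whose visible URL text doesn't match their real href.
# A simplified sketch, not a reconstruction of the actual campaign.
from html.parser import HTMLParser
from urllib.parse import urlparse

class AnchorChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_anchor = False
        self.href = ""
        self.text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_anchor = True
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.in_anchor:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_anchor = False
            shown = self.text.strip()
            if shown.startswith("http"):  # visible text looks like a URL
                if urlparse(shown).netloc != urlparse(self.href).netloc:
                    self.mismatches.append((shown, self.href))

checker = AnchorChecker()
checker.feed('<a href="http://attacker.example/claim">https://giveaway.example/cooler</a>')
print(checker.mismatches)  # the shown URL and the real destination disagree
```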
Cybercriminals will remain intent on hacking your online accounts, especially your business email. Attackers who use generative AI may be better able to translate their phishing lures into multiple languages, and chatbot-style applications can automate parts of the back-and-forth with potential victims.
Despite the phishing attacks AI makes possible, Pappu remains optimistic about the continued development of better, more refined security protections. “You’ve lowered the cost of what it takes to potentially lure someone,” she says. “But, on the flip side, we’ve built up greater detection capabilities as a result of these technologies.”