As it continues to evolve at a near-unimaginable pace, AI is becoming capable of many extraordinary things — from generating stunning art and 3D worlds to serving as an efficient, reliable workplace partner.
But are generative AI and large language models (LLMs) as deceitful as human beings?
Almost. At least for now, we maintain our supremacy in that area, according to research out today from IBM X-Force. In a phishing experiment designed to determine whether AI or humans would garner a higher click-through rate, ChatGPT needed just five simple prompts to build, within minutes, a convincing email that proved nearly, but not quite, as enticing as a human-generated one.
“As AI continues to evolve, we’ll continue to see it mimic human behavior more accurately, which may lead to even closer results, or AI ultimately beating humans one day,” Stephanie (Snow) Carruthers, IBM’s chief people hacker, told VentureBeat.
Five minutes versus 16 hours
After systematic experimentation, the X-Force team developed five prompts instructing ChatGPT to generate phishing emails targeted at employees in the healthcare industry. The resulting AI-generated email would ultimately be sent to 800 workers at a global healthcare company.
The model was first asked to identify the top areas of concern for employees in the industry; it named career advancement, job stability and fulfilling work, among others.
Then, asked which social engineering and marketing techniques it should use, ChatGPT recommended trust, authority and social proof on the social engineering side, and personalization, mobile optimization and a call to action on the marketing side. It also advised that the email should appear to come from the internal human resources manager.
Finally, ChatGPT generated a convincing phishing email in just five minutes. By contrast, Carruthers said it takes her team about 16 hours.
“I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” Carruthers said.
“Before starting this research project, if you would have asked me who I thought would be the winner, I’d say humans, hands down, no question. However, after spending time creating those prompts and seeing the AI-generated phish, I was very worried about who would win.”
The human team’s ‘meticulous’ process
After ChatGPT produced its email, Carruthers’ team got to work, beginning with open-source intelligence (OSINT) acquisition — that is, retrieving publicly accessible information from sites such as LinkedIn, the organization’s blog and Glassdoor reviews.
Notably, they uncovered a blog post detailing the recent launch of an employee wellness program and its manager within the organization.
In contrast to ChatGPT’s quick output, they then began “meticulously constructing” their phishing email, which included an employee survey of “five brief questions” that would only take “a few minutes” and needed to be returned by “this Friday.”
The human team’s final email was then sent to 800 employees at the same global healthcare company.
Humans win (for now)
In the end, the human phishing email proved more successful — but just barely. The click-through rate for the human-generated email was 14% compared to the AI’s 11%.
Carruthers attributed the human win to emotional intelligence, personalization, and short, succinct subject lines. First, the human team connected emotionally with employees by referencing a legitimate example from within their own company, while the AI chose a more generalized topic. Second, the human-crafted email addressed recipients by name.
Finally, the human-generated subject line was to the point (“Employee Wellness Survey”), while the AI’s was lengthier (“Unlock Your Future: Limited Advancements at Company X”), likely arousing suspicion from the start.
That suspicion also translated into a higher reporting rate for the AI email, at 59%, compared with 51% for the human-crafted one.
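For a sense of scale, here is a minimal back-of-envelope sketch, assuming each email reached exactly 800 recipients as reported; the raw counts it prints are illustrative arithmetic, not figures from the study.

```python
# Translate the reported percentages into approximate raw counts,
# assuming each email reached exactly 800 recipients (as reported above).
RECIPIENTS = 800

rates = {
    "human email clicked": 0.14,
    "AI email clicked": 0.11,
    "AI email reported as phishing": 0.59,
    "human email reported as phishing": 0.51,
}

for label, rate in rates.items():
    print(f"{label}: {rate:.0%} of {RECIPIENTS} ≈ {round(rate * RECIPIENTS)} employees")
```

By that rough math, the human email drew only about two dozen more clicks than the AI’s, a narrow margin given how little time the AI needed.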
Pointing to the subject lines, Carruthers said organizations should educate employees to look beyond traditional red flags.
“We need to abandon the stereotype that all phishing emails have bad grammar,” she said. “That’s simply not the case anymore.”
In fact, AI-driven phishing attempts are often grammatically flawless, she pointed out. Employees should instead be trained to watch for other warning signs, such as unusual length and complexity.
“By bringing this information to employees, organizations can help protect them from falling victim,” she said.
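As a rough illustration of the “length and complexity” warning signs Carruthers describes, the sketch below scores subject lines on word count and urgency wording. The five-word threshold and the keyword list are illustrative assumptions for this example, not criteria from IBM’s research.

```python
import re

# Hand-picked urgency and scarcity cues; a production filter would rely on
# a trained classifier and far richer features than this short list.
URGENCY_CUES = re.compile(
    r"\b(unlock|limited|act now|urgent|expires|last chance)\b",
    re.IGNORECASE,
)

def subject_line_flags(subject: str, max_words: int = 5) -> list[str]:
    """Return the warning signs a simple screener would raise for a subject line."""
    flags = []
    if len(subject.split()) > max_words:
        flags.append("unusually long subject line")
    if URGENCY_CUES.search(subject):
        flags.append("urgency or scarcity wording")
    return flags

# The two subject lines from the experiment described above.
for subject in ("Employee Wellness Survey",
                "Unlock Your Future: Limited Advancements at Company X"):
    print(f"{subject!r} -> {subject_line_flags(subject) or ['no flags']}")
```

Run against the two subject lines from the experiment, the AI’s trips both checks while the human team’s passes clean, consistent with the higher report rate the AI email drew.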
Why is phishing still so prevalent?
Human-generated or not, phishing remains a top tactic among attackers because, simply put, it works.
“Innovation tends to run a few steps behind social engineering,” said Carruthers. “This is most likely because the same old tricks continue to work year after year, and we see phishing take the lead as the top entry point for threat actors.”
The tactic remains so successful because it exploits human weaknesses, persuading us to click a link or hand over sensitive information, she said. Attackers exploit, for example, our desire to help others, or they manufacture a false sense of urgency so victims feel compelled to act quickly.
Furthermore, the research showed that gen AI offers attackers real productivity gains, speeding up their ability to create convincing phishing emails and freeing time for other malicious pursuits.
Organizations should be proactive: revamp social engineering programs to cover simple-to-execute vishing (voice call or voicemail phishing), strengthen identity and access management (IAM), and regularly update tactics, techniques and procedures (TTPs), threat detection systems and employee training materials.
“As a community, we need to test and investigate how attackers can capitalize on generative AI,” said Carruthers. “By understanding how attackers can leverage this new technology, we can help [organizations] better prepare for and defend against these evolving threats.”