Study finds humans beat ChatGPT when creating phishing attacks

Cybersecurity training services company Hoxhunt Ltd. today released the findings of a new study into the effectiveness of ChatGPT-generated phishing attacks, and although the technology continues to improve, human attackers still delivered better results.

The study analyzed more than 53,000 email users in more than 100 countries, comparing the success rates of simulated phishing attacks created by human social engineers with those created by large artificial intelligence language models. Although the potential for ChatGPT to be used for malicious phishing activity exists, human social engineers still outperform AI at inducing clicks on malicious links.

The gap between the success rates of phishing emails created by humans and by ChatGPT was significant: human “red teamers” induced a 4.2% click rate versus ChatGPT’s 2.9% in the sample of email users. Overall, the study found that humans outperformed AI at convincing other humans to act by 69%.

The study also revealed that users with more experience in a security awareness and behavior change program showed significantly greater resistance to phishing attacks, whether human- or AI-generated, with failure rates dropping from more than 14% among less trained users to between 2% and 4% among experienced users.

The idea that ChatGPT can be used for both good and evil isn’t new, but its nefarious use remains little studied. The study found that AI creates opportunities for attacker and defender alike. Although phishing attacks augmented by large language models do not yet perform as well as human social engineering, the researchers state that the gap will likely close and that attackers are already using AI.

“It’s imperative for security awareness and behavior change training to be dynamic with the evolving threat landscape in order to keep people and organizations safe from attacks,” the study notes.

Discussing the study’s findings, Melissa Bischoping, director of endpoint security research at endpoint management company Tanium Inc., told SiliconANGLE that AI does present new opportunities for efficiency, creativity and personalization of phishing lures. But, she said, it’s important to remember that the protections against such attacks remain essentially unchanged.

“It may be a good opportunity to update awareness training programs to inform employees about the emerging technologies and trends in phishing/smishing/vishing tactics to encourage increased vigilance and a ‘think before you click’ culture,” Bischoping explained. “We will potentially see increases in highly customized and convincing lures at scale. It’s much easier and much faster today for a threat actor to ask an AI to compose a message asking someone in a specific industry to do something and tie in relevant and convincing details.”

Mika Aalto, co-founder and chief executive officer at Hoxhunt, said that “we now know from the results of our study that effective, existing security awareness and behavior change programs protect against AI-augmented phishing attacks.”

“Within your holistic cybersecurity strategy, be sure to focus on your people and their email behavior because that is what our adversaries are doing with their new AI tools,” Aalto added. “Embed security as a shared responsibility throughout the organization with ongoing training that enables users to spot suspicious messages and rewards them for reporting threats until human threat detection becomes a habit.”

Image: Hoxhunt