eCommerceNews Australia - Technology news for digital commerce decision-makers

Over half of spam emails now generated by AI, new study finds


New research has revealed that more than half of all spam emails and 14% of business email compromise (BEC) attacks are now generated by artificial intelligence rather than humans.

Barracuda Networks, in partnership with researchers from Columbia University and the University of Chicago, analysed a dataset of malicious emails spanning February 2022 to April 2025. The analysis tracked significant changes in the composition of email threats since the launch of generative AI models such as ChatGPT.

Detection methods

The research team developed detectors capable of identifying whether unsolicited or malicious emails were written with the help of AI. They assumed that emails sent before November 2022, prior to ChatGPT's public release, were largely human-authored. This assumption gave the team a baseline of human-written messages against which later mail could be compared.

Running the full 2022 to 2025 dataset through the detector revealed differing trajectories in the adoption of AI-generated content within various types of email attacks. A marked increase in AI-generated spam was recorded following the introduction of ChatGPT, while the use of AI in BEC attacks rose more gradually.
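The core idea — treat pre-ChatGPT mail as a human-written baseline, then train a classifier to flag AI-like text — can be illustrated with a toy sketch. This is not the researchers' actual detector: the cutoff-date labelling mirrors the study's assumption, but the corpus, features and naive Bayes model here are illustrative stand-ins.

```python
# Toy sketch: label pre-ChatGPT emails "human", then train a simple
# bag-of-words classifier. Illustrative only -- not the study's detector.
import math
from collections import Counter
from datetime import date

CHATGPT_RELEASE = date(2022, 11, 30)  # ChatGPT's public launch

def human_baseline(dated_emails):
    """Study's assumption: mail sent before ChatGPT's release
    is treated as largely human-authored."""
    return [text for sent, text in dated_emails if sent < CHATGPT_RELEASE]

class TinyNaiveBayes:
    """Minimal bag-of-words naive Bayes with add-one smoothing."""

    def fit(self, texts, labels):
        self.counts = {}          # label -> Counter of word frequencies
        self.totals = Counter()   # label -> total word count
        self.priors = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.counts.setdefault(label, Counter()).update(words)
            self.totals[label] += len(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        best_label, best_score = None, -math.inf
        n, v = sum(self.priors.values()), len(self.vocab)
        for label in self.priors:
            # log prior + smoothed log likelihood of each word
            score = math.log(self.priors[label] / n)
            for w in words:
                score += math.log(
                    (self.counts[label][w] + 1) / (self.totals[label] + v))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Tiny made-up training set: terse human spam vs. formal AI-style phrasing
texts = [
    "hey click here fast money now",
    "urgent wire transfer pls reply asap",
    "dear valued customer we sincerely appreciate your prompt attention",
    "we would greatly appreciate your timely response regarding this matter",
]
labels = ["human", "human", "ai", "ai"]
clf = TinyNaiveBayes().fit(texts, labels)
```

With this toy training data, `clf.predict("we sincerely appreciate your prompt response")` leans towards the "ai" label, while terse, error-prone phrasing leans "human" — a crude stand-in for the stylistic signals (formality, grammatical polish) the study reports.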

Rise of AI in spam

Researchers found that by April 2025, 51% of spam emails were generated by AI, exceeding the rate of human-written spam. This suggests that the majority of emails currently found in junk or spam folders were authored using large language models.

In contrast, BEC attacks, which typically target executive-level individuals for financial fraud, saw 14% of messages generated by AI as of April 2025. This slower adoption reflects the more targeted nature of BEC attacks, which often require greater precision and personalisation.

AI motivations and tactics

The team investigated why attackers might be leveraging AI for email-based attacks. According to the findings, AI-generated emails typically exhibited greater formality, fewer grammatical mistakes, and higher linguistic sophistication compared to those written by humans. These features could help such emails evade detection and appear more credible to recipients.

Since the majority of email recipients in the Barracuda dataset were in English-speaking countries, AI tools also allowed attackers whose native language is not English to craft more convincing messages. The analysis noted that attackers used AI models to experiment with wording, in a practice similar to A/B testing in marketing, to determine which messages were more effective at bypassing defences and convincing recipients to interact with suspicious links.

The analysis highlighted that urgency—the pressure often exerted to provoke an immediate response—was not significantly different between AI-generated and human-written emails. This suggests that AI is primarily being used to refine email content rather than alter established social engineering tactics.

Overall, the findings indicate that cyber attackers are leveraging AI to boost their chances of success in email-based attacks: the tools help them develop and launch more attacks, more frequently, and make those attacks more evasive, convincing and targeted.

Evolving threats and mitigation

The report concluded that as generative AI becomes more prevalent in cyberattacks, these methods are continuing to evolve. Attackers are refining their strategies, leading to more effective and elusive malicious emails.

While much of the research focused on advances in attack strategies, it also highlighted that AI and machine learning are being harnessed to improve detection and prevention within security solutions. The paper states: "That's why an advanced email security solution equipped with multilayered, AI/ML-enabled detection is crucial."

Education and security awareness were also noted as essential components of defence. The report recommends investment in training for employees to ensure they are aware of current threats and are able to recognise and report suspicious emails, adding that "education also remains a powerful and effective protection against these types of attack."

The research was conducted with contributions from Van Tran, Vincent Rideout, Zixi Wang, Anmei Dasbach-Prisk, M. H. Afifi and Junfeng Yang, along with professors Ethan Katz-Bassett, Grant Ho and Asaf Cidon, and continues as both attack and defence methods evolve with new AI capabilities.
