Highlights:

  • Attackers are using generative AI, including the ChatGPT API, to craft convincing phishing emails, while purpose-built malicious platforms such as WormGPT and FraudGPT operate without ethical constraints.
  • To counter these threats, the report argues, companies must adopt AI-native cybersecurity measures that go beyond conventional methods by understanding who is communicating, the context of the communication, and the content of the email itself.

A recent report from Abnormal Security Corp. warns about the increasing prevalence of artificial intelligence-generated email attacks. The report underscores the growing threat posed by cybercriminals incorporating AI into their everyday tactics.

As outlined in the report, the widespread availability of generative AI technologies such as ChatGPT has made it easier for attackers to develop advanced cyber threats. These tools let attackers generate unique, sophisticated content at speed, which conventional security software struggles to detect.
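A toy example makes the detection problem concrete: a conventional signature- or keyword-based filter only catches phrasing it has seen before, so a fluent, freshly generated message with unique wording passes untouched. This is a minimal sketch; the phrases and messages below are invented for illustration and do not come from the report.

```python
# Toy illustration: why signature/keyword matching struggles with
# AI-generated text. All phrases and messages here are invented.

KNOWN_BAD_PHRASES = [
    "verify your account immediately",
    "click here to claim your prize",
]

def signature_filter(message: str) -> bool:
    """Flag a message only if it contains a previously seen bad phrase."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in KNOWN_BAD_PHRASES)

# A reused template trips the filter...
print(signature_filter("Please verify your account immediately."))  # True

# ...but a uniquely worded, fluent variant of the same lure does not.
print(signature_filter(
    "Hi Dana, finance flagged a mismatch on invoice 4471. Could you "
    "confirm the updated banking details before Friday's run?"
))  # False
```

Because a generative model can produce an unlimited supply of such one-off variants, filters keyed to previously seen content never accumulate a useful signature.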

Attackers leverage generative AI to bolster their social engineering tactics. Examples include using the ChatGPT API to craft convincing phishing emails and the emergence of malicious AI platforms such as WormGPT and FraudGPT, which operate without ethical constraints.

The report delves into numerous AI-generated attacks identified by Abnormal, including attempts to deliver malware under the guise of insurance companies, phishing schemes that impersonate Netflix to steal credentials, and invoice fraud that impersonates a cosmetics brand. These attacks employ sophisticated language and notably lack typical signs of phishing, such as grammatical errors, a level of polish that makes them more convincing to potential victims.

Mike Britton, Chief Information Security Officer at Abnormal Security and the report’s author, highlights that traditional email security solutions struggle to detect these AI-generated attacks. Because these text-based attacks originate from legitimate email services and rely heavily on social engineering, they often elude conventional email security measures. The absence of typical indicators, such as typos or grammatical errors, also makes them harder for people to recognize.

Britton noted that malicious emails “land in employee inboxes where they are forced to make a decision on whether or not to engage,” and that with AI “completely eliminating the grammatical errors and typos that were historically telltale signs of an attack, humans are much more likely to fall victim than ever before.”

The report contends that to counter these threats, companies must embrace AI-native cybersecurity measures that go beyond conventional methods. That means understanding the identity and behavior of every individual within an organization, the context of their communications, and the content of the email itself, an approach that provides a more effective defense against AI-generated attacks. Britton added, “For security leaders, this is a wake-up call to prioritize cybersecurity measures to safeguard against these threats before it is too late.”
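To make that identity-and-context idea concrete, here is a minimal, hypothetical sketch of behavior-based scoring: an email is judged against a sender’s historical baseline rather than against its text alone. The fields, weights, and scoring below are invented for illustration and are not Abnormal Security’s actual method.

```python
# Hypothetical sketch of behavior/context-based email scoring.
# The fields, weights, and thresholds are invented for illustration
# and do not reflect Abnormal Security's actual product or methods.

from dataclasses import dataclass, field

@dataclass
class SenderProfile:
    """Historical baseline for a sender, built from past messages."""
    usual_sending_domains: set[str] = field(default_factory=set)
    usual_recipients: set[str] = field(default_factory=set)
    has_requested_payments_before: bool = False

def risk_score(profile: SenderProfile, from_domain: str,
               recipient: str, mentions_payment: bool) -> float:
    """Accumulate risk from deviations against the sender's baseline."""
    score = 0.0
    if from_domain not in profile.usual_sending_domains:
        score += 0.4  # message arrives from an unfamiliar domain
    if recipient not in profile.usual_recipients:
        score += 0.2  # sender has never written to this recipient
    if mentions_payment and not profile.has_requested_payments_before:
        score += 0.4  # first-ever payment request is a strong signal
    return score

profile = SenderProfile(
    usual_sending_domains={"acme.com"},
    usual_recipients={"ap@partner.com"},
)

# Fluent, typo-free text still scores high: the behavior is anomalous.
score = risk_score(profile, from_domain="acme-billing.net",
                   recipient="cfo@partner.com", mentions_payment=True)
print(f"risk: {score:.1f}")  # risk: 1.0 -> hold for review
```

The point of the sketch is that the signal comes from deviation against a learned baseline rather than from the wording itself, which is why polished, error-free prose alone does not defeat this style of defense.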