Highlights:

  • With AI-generated content, malicious emails are now difficult for businesses and individuals to distinguish from legitimate business correspondence.
  • The report provides solutions to combat threats posed by AI while also warning about the use of AI in phishing and other scams.

Abnormal Security Corp., a cybersecurity startup, has released a new report describing how threat actors are increasingly using generative artificial intelligence to launch phishing and malware attacks.

The report discusses how hackers use AI tools such as Google Bard and OpenAI LP’s ChatGPT to create flawlessly written emails that can pass for official correspondence.

The researchers from Abnormal Security examined several recent attacks and found that these fake emails take various forms, including vendor fraud, credential phishing, and sophisticated business email compromise (BEC) schemes. Until recently, it was often possible to identify phishing emails by looking for grammar errors, unusual phrasing, or other irregularities. With AI-generated content, malicious emails are now difficult for businesses and individuals to distinguish from legitimate business communication.

The report explicitly identifies the use of AI in impersonation attacks as a trend. In one instance, a phishing attempt masquerading as Facebook informed users that their page had been flagged as inappropriate and was no longer publishable. The email’s tone and language closely matched official Facebook communications, and it was free of grammatical errors.

Another instance from the analysis involved payroll diversion fraud, in which an attacker posing as an employee requested changes to their direct deposit information. The email demonstrated the potential risk of AI-powered phishing: it had a professional tone and no apparent signs of compromise.

The report provides solutions to combat the threats posed by AI while also serving as a warning about its use in phishing and other scams. The researchers argue that AI itself is the most effective tool for identifying AI-generated emails. In Abnormal’s case, its platform evaluates the text of suspicious emails to determine the probability that an AI language model predicted each phrase.
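To make that idea concrete, here is a minimal sketch of this kind of analysis: scoring how predictable an email’s text is under a public language model. GPT-2 is used purely as a stand-in, since the report does not say which model Abnormal’s platform relies on, and the threshold is an illustrative assumption. Unusually high predictability (low perplexity) across a whole message can hint at machine generation.

```python
# Minimal sketch: score how "predictable" an email's text is under a
# public language model (GPT-2 as a stand-in; the report does not say
# which model Abnormal's platform actually uses).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity; lower means more predictable text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # labels=input_ids makes the model return the mean cross-entropy
        # of predicting each token from the tokens before it.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

email_body = (
    "Dear customer, your account page has been flagged as inappropriate "
    "and is scheduled for unpublishing. Please review the attached policy."
)
score = perplexity(email_body)
# The cutoff of 30 is illustrative only; real systems combine this with
# many other signals, since low perplexity also occurs in templated
# human-written text.
print(f"perplexity={score:.1f}",
      "-> possible AI generation" if score < 30 else "-> inconclusive")
```

A low score alone proves nothing, which is why, as the report describes, this measurement is treated as one signal among several rather than a verdict.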

The report notes that this analysis acts as a warning signal of probable AI involvement in an email’s composition, even though it can also flag some emails that were not AI-generated. In an era when hackers themselves employ AI, such analysis, combined with other signals, is essential for spotting suspicious intent.

As the report put it: “Generative AI will make it nearly impossible for the average employee to tell the difference between a legitimate email and a malicious one, which makes it more vital than ever to stop attacks before they reach the inbox. Modern solutions use AI to understand the signals of known good behavior, creating a baseline for each user and each organization and then blocking the emails that deviate from that — whether they are written by AI or by humans.”
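In the same spirit, the sketch below illustrates what per-user behavioral baselining could look like: fit an anomaly detector on features of a user’s known-good mail, then flag messages that deviate from the baseline. The features, model choice, and threshold here are all illustrative assumptions, not Abnormal’s actual design.

```python
# Illustrative sketch of per-user behavioral baselining, not Abnormal's
# actual system: fit an anomaly detector on features of a user's known-good
# mail, then flag new messages that deviate from that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

def featurize(msg: dict) -> list[float]:
    # Hypothetical features; a real platform would use far richer signals
    # (sender history, authentication results, link reputation, etc.).
    return [
        float(msg["hour_sent"]),           # typical send time
        float(msg["num_links"]),           # links per message
        float(msg["new_sender_domain"]),   # first contact with this domain?
        float(msg["asks_payment_change"]), # requests banking/payroll changes?
    ]

# Baseline: features drawn from emails this user has legitimately received.
known_good = [
    {"hour_sent": 10, "num_links": 1, "new_sender_domain": False, "asks_payment_change": False},
    {"hour_sent": 14, "num_links": 0, "new_sender_domain": False, "asks_payment_change": False},
    {"hour_sent": 9,  "num_links": 2, "new_sender_domain": True,  "asks_payment_change": False},
] * 20  # repeated to give the model a minimal training set

baseline = IsolationForest(contamination=0.01, random_state=0)
baseline.fit(np.array([featurize(m) for m in known_good]))

# A payroll-diversion-style message: off-hours, new domain, banking change.
suspect = {"hour_sent": 3, "num_links": 1,
           "new_sender_domain": True, "asks_payment_change": True}
verdict = baseline.predict(np.array([featurize(suspect)]))[0]  # -1 = anomaly
print("block" if verdict == -1 else "deliver")
```

Because the detector models what is normal for each user rather than what attacks look like, it can block a deviating message regardless of whether a human or an AI wrote it, which is the point the report’s closing quote makes.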