Highlights:

  • Microsoft and OpenAI sound the alarm on state-backed hackers misusing AI chatbots to support cyberattacks. To combat this, the companies have unveiled joint principles, including blocking malicious accounts, to safeguard AI technology and prevent its exploitation for harm.
  • Microsoft found that Crimson Sandstorm, an Iranian hacking group, used OpenAI services. According to the company, the group frequently uses .NET malware in cyberattacks.

Recent cybersecurity research from Microsoft and OpenAI disclosed that numerous state-sponsored hacking groups are using artificial intelligence large language models to support their cyberattack campaigns.

The companies recently shared their findings in two blog posts. Additionally, Microsoft and OpenAI outlined principles guiding their efforts to counter the use of artificial intelligence by state-sponsored hackers and other malicious actors. The principles encompass best practices, such as blocking accounts that hackers create to access large language model-powered chatbots.

The primary focus of the companies’ recently published research is a Russian hacking group identified as Forest Blizzard. According to Microsoft and OpenAI, the group directs its cyber activities at organizations across various sectors, including defense, energy, and transportation. Microsoft found that Forest Blizzard has been “extremely active in targeting organizations in and related to Russia’s war in Ukraine.”

According to the companies, the group used OpenAI services to research satellite communication protocols and radar imaging technology. Forest Blizzard also employed AI to support its script development work, seeking assistance with scripting tasks such as file manipulation and data selection.
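
To make concrete the kind of scripting assistance described, the benign Python sketch below performs a routine file-manipulation and data-selection task of the sort a chatbot can help write. The directory, file extension, column name, and threshold here are hypothetical illustrations, not details attributed to the group.

```python
import csv
from pathlib import Path


def select_rows(source_dir: str, extension: str, min_value: float) -> list[dict]:
    """Scan a directory for delimited files and keep rows above a threshold.

    Hypothetical example: illustrates the routine file-manipulation and
    data-selection scripting that LLM assistants are commonly asked to write.
    """
    selected = []
    for path in Path(source_dir).glob(f"*{extension}"):
        with path.open(newline="") as handle:
            for row in csv.DictReader(handle):
                # Keep only rows whose 'value' column clears the threshold.
                if float(row.get("value", 0)) >= min_value:
                    selected.append(row)
    return selected


if __name__ == "__main__":
    # Example invocation with made-up inputs.
    rows = select_rows("./reports", ".csv", min_value=10.0)
    print(f"Selected {len(rows)} rows")
```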

The Microsoft and OpenAI research also covered the activities of a North Korean hacking group identified as Emerald Sleet. The companies concluded that the group has been using spear-phishing emails to gather intelligence from experts on North Korea. Emerald Sleet used OpenAI services to locate these specialists, create content that “would likely be for use in spear-phishing campaigns,” and research scripting techniques.

Microsoft found that an Iranian hacking group called Crimson Sandstorm also used OpenAI services. According to the company, the group frequently conducts cyberattacks using malware based on the .NET framework. Crimson Sandstorm used OpenAI services to develop .NET code and to research methods for disabling antivirus applications.

Microsoft also presented research concerning two Chinese hacking groups in its blog post. The first group, tracked as Charcoal Typhoon, has targeted organizations across various industries, including the defense sector. According to Microsoft, the hackers have recently been conducting “limited exploration” into how large language models could be used to understand commodity cybersecurity tools.

The second hacking group, tracked as Salmon Typhoon, is characterized by Microsoft as a sophisticated threat actor with a track record of targeting organizations in the U.S. defense sector. According to the company, Salmon Typhoon used OpenAI services to troubleshoot code errors and to research intelligence agencies, malware development techniques, and other related topics.

Alongside their cybersecurity research, Microsoft and OpenAI recently outlined a set of principles that will guide their efforts to counter the use of AI by hackers.

The first principle specifies that the companies will actively work to prevent threat actors from using their AI services. Their efforts in this area will cover various offerings, including chatbots like ChatGPT, application programming interfaces (APIs), and other related services. Once OpenAI and Microsoft identify that hackers are using their services, they will take “appropriate action to disrupt their activities, such as disabling their accounts, terminating services, or limiting access to resources.”
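
As a purely hypothetical sketch of what “disabling accounts” or “limiting access to resources” could look like at the service layer, the Python snippet below gates requests on an account blocklist and a per-account rate cap. The class, method, and account names are illustrative assumptions and do not describe Microsoft’s or OpenAI’s actual enforcement systems.

```python
from dataclasses import dataclass, field


@dataclass
class AccessGate:
    """Hypothetical provider-side gate, not an actual OpenAI or Microsoft API."""
    blocked_accounts: set[str] = field(default_factory=set)
    rate_limits: dict[str, int] = field(default_factory=dict)  # account -> max requests/min

    def disable(self, account_id: str) -> None:
        # Disabling an account adds it to the blocklist.
        self.blocked_accounts.add(account_id)

    def limit(self, account_id: str, max_per_minute: int) -> None:
        # Limiting access caps how often a flagged account may call the service.
        self.rate_limits[account_id] = max_per_minute

    def check(self, account_id: str) -> str:
        if account_id in self.blocked_accounts:
            return "denied: account disabled"
        if account_id in self.rate_limits:
            return f"allowed: capped at {self.rate_limits[account_id]} requests/min"
        return "allowed"


# Example: disable one flagged account, throttle another, then check both.
gate = AccessGate()
gate.disable("acct-123")
gate.limit("acct-456", max_per_minute=5)
print(gate.check("acct-123"))  # denied: account disabled
print(gate.check("acct-456"))  # allowed: capped at 5 requests/min
```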

The other principles outlined by the companies cover a range of related best practices. They intend to share data about state-sponsored threat actors’ use of AI with other industry stakeholders and also aim to inform the public about significant developments in this domain. OpenAI, for its part, stated that it would use the data it gathers about hacker groups’ activities to strengthen the safety mechanisms of its AI systems.