Highlights:

  • Copilot is built on OpenAI LP’s GPT-4, letting cybersecurity experts ask about and receive answers to current security issues affecting their environment.
  • Microsoft combined its threat intelligence footprint with the strength of OpenAI’s large language model to create the product.

Microsoft Corp. recently unveiled Security Copilot, a new tool that streamlines cybersecurity professionals’ work and addresses security threats through an easy-to-use artificial intelligence assistant. Security professionals must juggle a variety of tools and often overwhelming amounts of data from numerous sources.

Copilot is built on OpenAI LP’s GPT-4, letting cybersecurity experts ask about and receive answers to current security issues affecting their environment. It can draw on internal company knowledge to return actionable information, learn from previous intelligence, correlate recent threat activity with data from other tools, and provide up-to-date information.

Vasu Jakkal, corporate vice president of Microsoft Security, said, “Today, the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers.”

Microsoft combined its threat intelligence footprint with the strength of OpenAI’s large language model to create the product. It can understand questions and summarize threat reports from a company’s cybersecurity team and external data. Microsoft’s model receives over 65 trillion threat signals daily and is informed by over 100 data sources.

Professionals can use Security Copilot to quickly start an investigation of critical incidents by diving into the data using natural language. It can use its natural language comprehension to summarize procedures and events and produce brief reports that will help team members catch up more quickly.

A professional might, for instance, ask the assistant to prepare an incident response report on a specific ongoing investigation, drawing on a specific set of tools and events. Based on the query, the AI assistant gathers data and information from those tools and presents a summary, a report, and other information to users. Users can then refine their prompt to ask the AI for more details, or to change how the report is displayed or summarized, which tools it draws on, and what incident data it uses.
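The query-then-refine loop described above can be sketched in miniature. The following Python sketch is purely illustrative: the `ToolEvent` and `InvestigationAssistant` names, the tool labels, and the report format are all invented for this example and do not reflect Security Copilot’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolEvent:
    """One event pulled from a security tool (all names are illustrative)."""
    tool: str
    incident_id: str
    detail: str

@dataclass
class InvestigationAssistant:
    """Toy stand-in for a prompt-driven incident-report generator."""
    events: list = field(default_factory=list)

    def report(self, incident_id, tools=None):
        # Select events for the incident, optionally narrowed to given tools,
        # mimicking a refined prompt that restricts the data sources used.
        selected = [
            e for e in self.events
            if e.incident_id == incident_id and (tools is None or e.tool in tools)
        ]
        lines = [f"Incident {incident_id}: {len(selected)} event(s)"]
        lines += [f"- [{e.tool}] {e.detail}" for e in selected]
        return "\n".join(lines)

assistant = InvestigationAssistant(events=[
    ToolEvent("endpoint-av", "INC-42", "Suspicious PowerShell execution"),
    ToolEvent("siem", "INC-42", "Sign-in from unusual location"),
    ToolEvent("siem", "INC-99", "Port scan detected"),
])

# First prompt: full report on one incident.
full = assistant.report("INC-42")
# Refined prompt: same incident, narrowed to a single tool.
narrowed = assistant.report("INC-42", tools={"siem"})
```

The point of the sketch is only the shape of the interaction: the same underlying data can be re-summarized under different constraints as the user reworks the prompt.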

Jakkal added that by utilizing Microsoft’s global threat intelligence, the assistant could spot anomalies and flag problems, anticipating and identifying potential threats that experts might miss. Experts could also use it to sharpen their skills, for example by reverse-engineering a potentially harmful script and sending the results to a colleague to determine whether the script should be flagged for further investigation.

Although it is a helpful tool for security professionals, it has some drawbacks. Microsoft has been quick to caution that its Copilot tools that integrate GPT-4 don’t “always get everything right,” and that warning applies to Security Copilot as well. To address this and make improvements, Microsoft has added a feature that lets users flag an AI-generated response as off-target or incorrect.

That might not be ideal for a cybersecurity product, though. Avivah Litan, vice president analyst at Gartner Inc., said, “Security Copilot requires that users check the accuracy of the outputs without giving users any scores as to the likelihood that an output is correct or not. There’s an added danger that users will rely on the outputs and assume they are correct, which is an unsafe approach given we are talking about enterprise security.”
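Litan’s point about missing likelihood scores can be illustrated with a minimal sketch: attach a confidence value to each output and route low-confidence answers to a human review queue. This is a hypothetical design, not a feature of Security Copilot; the threshold and field names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical cutoff: anything below this goes to a human reviewer.
REVIEW_THRESHOLD = 0.8

@dataclass
class ScoredOutput:
    text: str
    confidence: float  # 0.0-1.0, e.g. derived from model uncertainty

    def needs_review(self):
        return self.confidence < REVIEW_THRESHOLD

outputs = [
    ScoredOutput("Host appears fully patched", 0.95),
    ScoredOutput("Script appears to exfiltrate data", 0.55),
]

# Only the low-confidence finding is escalated for manual verification.
for_review = [o for o in outputs if o.needs_review()]
```

Even a coarse score like this would let analysts triage which AI findings to double-check first, rather than treating every output as equally trustworthy.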

In Microsoft’s demo, the AI made a harmless error by referring to Windows 9, which doesn’t exist. In a real-world investigation, however, an AI could make a mistake that would be harder for a security expert to spot and could be passed along. Litan said, “The market has a long way to go before enterprises can comfortably use these services.”

To bolster the threat detection capabilities behind Security Copilot, Microsoft has been on a buying spree, acquiring companies such as RiskIQ Inc., a threat management company, and Miburo, a threat analysis company.

The AI assistant integrates with various Microsoft security products, such as Defender, its anti-malware solution, and Sentinel, the company’s enterprise cloud-based cyber threat management solution. The product is currently available in private preview.