Highlights:

  • Attempts to enter personally identifiable information were found in over half of the data loss prevention (DLP) incidents that Menlo Security observed in the past thirty days.
  • Researchers at Menlo Security surmise that the surge may be partially attributable to the numerous AI platforms that have introduced file upload capabilities in the last six months.

Sensitive and personally identifiable information is present in 55% of all generative AI inputs, according to a new analysis published by cloud security firm Menlo Security Inc.

The discovery was among several findings in Menlo’s report, “The Continued Impact of Generative AI on Security Posture,” which examined how employees’ use of generative AI is evolving and the organizational security vulnerabilities those patterns introduce.

More than half of the data loss prevention (DLP) events Menlo Security detected over the last thirty days involved attempts to enter personally identifiable information. Confidential documents were the second most prevalent data type triggering DLP detections, accounting for 40% of input attempts.

The report describes how the market and nature of generative AI usage changed significantly between July and December last year. Enterprises are facing new cybersecurity concerns due to the rising popularity of new platforms and services.

One example from the report is an 80% rise in file upload attempts to generative AI websites. Researchers at Menlo Security surmise that the surge may be partially attributable to the numerous AI platforms that have introduced file upload capabilities in the last six months. Users were quick to take advantage of file uploads once the feature appeared in publicly accessible generative AI models.

While uploading files has become more common, copy-and-paste attempts into generative AI websites have dropped only slightly over the same period and remain quite common. Given the simplicity and speed with which data such as source code, customer lists, roadmap plans, and personally identifiable information can be uploaded or pasted, these two behaviors had the biggest influence on data loss.

On a positive note, the report found that businesses are aware of the issue and are concentrating more on protecting against data loss and leakage caused by the use of generative AI. The Menlo Labs Threat Research team found that organizational security policies for generative AI sites have increased by 26% over the previous six months. Nevertheless, most organizations apply these policies application by application rather than defining guidelines that cover all generative AI applications together.

According to the report, organizations that apply restrictions on an application-by-application basis run the risk of either having to update their list of applications frequently or leaving employees’ access to generative AI sites unprotected. Avoiding this requires a scalable and effective method for monitoring employee behavior, adapting to the new functionalities that generative AI systems introduce, and managing the associated cybersecurity threats.
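The gap between the two policy approaches can be illustrated with a minimal sketch. The hostnames, category label, and helper functions below are assumptions for illustration only, not details from the report: the point is simply that a per-application blocklist misses any newly launched AI site until the list is updated, while a category-wide (group-level) rule catches it automatically.

```python
# Illustrative sketch only: contrasting an app-by-app blocklist with a
# single group-level (category-based) rule. All site names and the
# category lookup are hypothetical stand-ins.

PER_APP_BLOCKLIST = {"chat.openai.com", "bard.google.com"}  # must be updated as new apps appear

GENAI_CATEGORY = "generative-ai"  # a category label a secure web gateway might assign

def categorize(host: str) -> str:
    """Stand-in for a real URL-categorization service."""
    known_genai = {"chat.openai.com", "bard.google.com", "new-ai-tool.example"}
    return GENAI_CATEGORY if host in known_genai else "uncategorized"

def allowed_per_app(host: str) -> bool:
    # App-by-app policy: only explicitly listed hosts are blocked.
    return host not in PER_APP_BLOCKLIST

def allowed_group_level(host: str) -> bool:
    # Group-level policy: any host in the generative AI category is blocked.
    return categorize(host) != GENAI_CATEGORY

# A newly launched AI site slips past the per-app list but not the category rule.
host = "new-ai-tool.example"
print(allowed_per_app(host))      # unprotected until the blocklist is updated
print(allowed_group_level(host))  # blocked by the group-level policy
```

The design choice mirrors the report’s argument: the per-app list requires constant maintenance, whereas the category rule applies to every current and future site the categorization service recognizes.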

Other findings from the report show that 92% of organizations with application-specific security policies have controls specifically targeting generative AI, while the remaining 8% permit unfettered use. Among those applying group-level security policies to generative AI applications, 79% implement security-focused rules, while 21% permit unrestricted use.

In addition, file submissions to generative AI sites are 70% higher when the category as a whole is considered rather than just the top six sites. This gap highlights the difficulties and limitations of application-specific security policies, and the broader concern about data exposure in the fast-changing generative AI landscape.

Pejman Roshan, Menlo Security Chief Marketing Officer, said, “While we’ve seen a commendable reduction in copy and paste attempts in the last six months, the dramatic rise of file uploads poses a new and significant risk. Organizations must adopt comprehensive, group-level security policies to effectively eliminate the risk of data exposure on these sites.”