Highlights:

  • Thanks to the great success of its bug bounty program, more than 13,000 community members are searching the AI and ML supply chain for significant vulnerabilities.
  • Protect AI's research and bug bounty program have shown that security vulnerabilities can affect the tools used across the supply chain to create the ML models that drive AI applications.

Protect AI Inc., a cybersecurity company focused on artificial intelligence (AI) and machine learning (ML), has published a new study highlighting critical flaws in AI and ML systems that were recently discovered through its bug bounty program.

Former Oracle Corp. and Amazon Web Services Inc. employees founded Protect AI in 2022. Among them was Chief Executive Ian Swanson, previously AWS's worldwide leader for AI and ML. The company provides solutions that enable enterprises to see, understand, and manage their ML environments, resulting in safer AI applications.

One of its offerings is a bug bounty program that Protect AI says is the first dedicated to finding vulnerabilities in AI and ML. Thanks to the program's great success, more than 13,000 community members are now searching the entire AI and ML supply chain for significant vulnerabilities.

Through its research and the bug bounty program, Protect AI has discovered that security vulnerabilities can affect the tools used across the supply chain to create the ML models that drive AI applications. Many open-source tools, frameworks, and artifacts are hazardous because they can ship with built-in vulnerabilities that allow direct system takeover, such as local file inclusion or unauthenticated remote code execution.

The first vulnerability found posed a severe risk of server takeover and loss of sensitive data. It sits in the code that fetches remote data storage in MLflow, a popular tool for storing and tracking models. By tricking a user into connecting to a malicious remote data source, an attacker could run commands on the user's machine.
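
The article does not spell out the exploit chain, but one common way a malicious data source turns into command execution is unsafe deserialization of whatever the client fetches. The sketch below is purely illustrative, not MLflow code: it assumes the fetched artifact is loaded with Python's pickle, whose load hook can invoke arbitrary callables.

```python
# Illustrative sketch only -- not MLflow code. It shows why fetching and
# deserializing artifacts from an untrusted remote data source can hand an
# attacker code execution, assuming the artifact is loaded with pickle.
import os
import pickle

class MaliciousArtifact:
    # pickle invokes __reduce__ when serializing; on load, the returned
    # callable is executed with the returned arguments.
    def __reduce__(self):
        return (os.system, ("echo attacker-controlled command runs here",))

payload = pickle.dumps(MaliciousArtifact())

# A client that blindly deserializes whatever the remote source returns
# runs the attacker's command the moment it "loads the model".
pickle.loads(payload)
```

The takeaway: anything a tool deserializes from a source an attacker controls should be treated as code, not data.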

Another security flaw found in MLflow was an arbitrary file overwrite vulnerability. It was caused by a weakness in the validation function MLflow uses to verify that file paths are safe, which malicious actors could exploit to remotely overwrite files on the MLflow server.
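
MLflow's actual validation function is not reproduced in the article, so the following is a hypothetical sketch of the flaw class: a naive path check that misses embedded traversal sequences, contrasted with a check that resolves the path before verifying containment. The root directory and function names are illustrative.

```python
import os

ARTIFACT_ROOT = "/srv/mlflow/artifacts"

def save_artifact_unsafe(relative_path: str, data: bytes) -> None:
    # Flawed validation: rejects a leading "../" but not one embedded
    # later, so "logs/../../../../etc/crontab" passes the check and the
    # write lands outside the artifact root.
    if relative_path.startswith("../"):
        raise ValueError("unsafe path")
    with open(os.path.join(ARTIFACT_ROOT, relative_path), "wb") as f:
        f.write(data)

def save_artifact_safe(relative_path: str, data: bytes) -> None:
    # Robust check: resolve the final path first, then confirm it is
    # still contained within the artifact root.
    root = os.path.realpath(ARTIFACT_ROOT)
    dest = os.path.realpath(os.path.join(root, relative_path))
    if os.path.commonpath([root, dest]) != root:
        raise ValueError("path escapes artifact root")
    with open(dest, "wb") as f:
        f.write(data)
```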

MLflow's third vulnerability was a local file inclusion issue. When MLflow is hosted on certain operating systems, a bypass in its file-path safety mechanism can cause it to unintentionally reveal the contents of sensitive files. If MLflow ran with permissions to read SSH keys or cloud credentials, the consequences could be severe, ranging from loss of confidential data to a total takeover of the system.
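
The article notes the exposure only occurs on certain operating systems, which is typical of separator-sensitive path filters. The hypothetical sketch below assumes a Windows host: a filter that splits on forward slashes never sees backslash-based traversal. All paths and names are illustrative.

```python
import os

ARTIFACT_ROOT = r"C:\mlflow\artifacts"

def read_artifact_unsafe(requested: str) -> bytes:
    # Flawed filter: only forward-slash traversal is blocked. On Windows,
    # a request like r"..\..\Users\admin\.ssh\id_rsa" contains no "/"
    # components, so it slips past the check, and os.path.join plus
    # open() resolve the backslash traversal out of the artifact root.
    if ".." in requested.split("/"):
        raise ValueError("unsafe path")
    with open(os.path.join(ARTIFACT_ROOT, requested), "rb") as f:
        return f.read()
```

The resolved-path containment check from the previous sketch closes this class of bypass, since it operates on the fully normalized path rather than on raw separators.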

Maintainers were notified of every vulnerability at least 45 days before public disclosure. All of the flaws underscore the need for strict security controls in AI and ML tools, given their access to private and sensitive data.