Highlights:

  • Guardian combines the strengths of Protect AI’s open-source tools, enabling enterprise-level enforcement and administration of model security.
  • Guardian also offers advanced access control features and dashboards, giving security teams control over which models enter their environment and in-depth insight into model origins.

Protect AI Inc., a startup specializing in artificial intelligence and machine learning cybersecurity, has introduced Guardian, a secure gateway that enables organizations to enforce security policies on machine learning models and helps keep malicious code out of their environments.

Built on Protect AI’s open-source tool ModelScan, the service scans machine learning models for unsafe code. Guardian consolidates the strengths of that open-source solution, providing enterprise-level enforcement and management of model security, and extends its coverage with proprietary scanning capabilities.

The service was created in response to security concerns around open-source foundation models on platforms such as Hugging Face. While these models fuel a wide range of AI applications, the open sharing of model files on public repositories also creates a channel through which malicious code can spread, often inadvertently.

Ian Swanson, Chief Executive of Protect AI, said, “ML models are new types of assets in an organization’s infrastructure, yet they are not scanned for viruses and malicious code with the same rigor as even a PDF file before they are used. There are thousands of models downloaded millions of times from Hugging Face on a monthly basis and these models can contain dangerous code. Guardian enables customers to take back control over open-source model security.”
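The risk Swanson describes stems largely from serialization formats such as Python’s pickle, which many model files use and which can execute arbitrary code at load time. The snippet below is a generic, deliberately harmless illustration of that mechanism, not an example of Protect AI’s tooling; the class name and the echoed command are hypothetical.

```python
import pickle


class MaliciousPayload:
    """Illustrative only: an object whose unpickling runs an arbitrary command."""

    def __reduce__(self):
        # When this object is unpickled, Python calls os.system("echo pwned")
        # instead of reconstructing a harmless object.
        import os
        return (os.system, ("echo pwned",))


# Serializing the payload produces a file that looks like any other model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Loading the file executes the embedded command, which is why model files
# should be scanned before they are deserialized.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

Because the payload runs during deserialization itself, simply downloading and loading an untrusted checkpoint is enough to compromise a machine, no call to the model required.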

Unlike open-source alternatives, Guardian functions as a secure gateway between repositories such as Hugging Face and an organization’s machine learning development and deployment workflows, and it applies proprietary vulnerability scanners tailored to open-source models.
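As an illustration of the gateway pattern, the sketch below routes Hugging Face Hub downloads through a proxy endpoint so that a policy check can sit in the download path. The gateway URL, repository ID, and filename are hypothetical, and this is a generic sketch of the approach rather than Guardian’s documented configuration.

```python
import os

# Assumption for illustration: an internal gateway proxies Hugging Face Hub
# traffic. huggingface_hub honors the HF_ENDPOINT environment variable, so
# setting it before import routes downloads through that endpoint.
os.environ["HF_ENDPOINT"] = "https://guardian.example.internal"  # hypothetical URL

from huggingface_hub import hf_hub_download

# The download request now flows through the gateway, which can refuse to
# serve models that fail the organization's security policy.
path = hf_hub_download(repo_id="org/model", filename="pytorch_model.bin")
print(path)
```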

Guardian offers advanced access control features and dashboards, allowing security teams to control which models enter their environment and to see detailed information about model origins, creators, and licensing. It integrates with existing security frameworks and enhances Protect AI’s Radar, giving organizations comprehensive visibility into their AI and machine learning threat surface.

While Guardian is a recent addition, the open-source technology it builds on, ModelScan, has been available since Protect AI launched it last year. ModelScan has assessed over 400,000 models on Hugging Face against a nightly refreshed knowledge base, and so far it has identified more than 3,300 models capable of executing malicious code.

Protect AI, a startup backed by venture capital, secured its latest funding of USD 35 million in July. Notable investors in the company include Evolution Equity Partners LLP, Boldstart Ventures LLC, Pelion Ventures Partners LLC, Salesforce Ventures LLC, Knollwood Capital LLC, and Acrew Capital LP.