- The plugin is designed to defend against emerging generative AI-specific attacks, such as prompt injections and “AI hallucinations.”
- When used with its Supply Chain Threat Intelligence, the CheckAI plugin offers defense against harmful open-source packages and dependencies.
Checkmarx Ltd., an application security testing company, has unveiled the CheckAI GPT Plugin to identify and block potential attacks against ChatGPT-generated code.
With the CheckAI GPT Plugin, developers and security teams can protect themselves from attacks launched by malicious open-source packages and dependencies without leaving the ChatGPT interface. Used in conjunction with Checkmarx’s Supply Chain Threat Intelligence, it offers a comprehensive security framework.
The plugin lets development teams adopt GenAI technologies like ChatGPT while still adhering to AppSec rules. It scans GPT-generated code for vulnerabilities within the ChatGPT interface and gives immediate feedback on flaws and open-source package validation.
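To make the idea of scanning GPT-generated code concrete, here is a deliberately simplified sketch (not Checkmarx’s actual engine, whose internals are not public) of the kind of flaw such a scanner would flag: SQL built by string interpolation, a classic injection risk, versus the parameterized query a reviewer would accept.

```python
import re

# Toy pattern for SQL built via f-strings, %-formatting, or concatenation.
# A real scanner uses far more sophisticated analysis; this is illustrative only.
SQL_INJECTION_PATTERN = re.compile(
    r"execute\(\s*(f[\"']|[\"'][^\"']*%s|[^)]*\+)", re.IGNORECASE
)

def flag_sql_injection(code: str) -> bool:
    """Return True if the snippet appears to build SQL from raw input."""
    return bool(SQL_INJECTION_PATTERN.search(code))

# Hypothetical GPT-generated snippet an in-interface scanner would flag:
generated = 'cursor.execute(f"SELECT * FROM users WHERE name = {name}")'
# The parameterized version passes the check:
fixed = 'cursor.execute("SELECT * FROM users WHERE name = ?", (name,))'
```

The point is the feedback loop: the developer sees the warning immediately, in the same interface where the code was generated, rather than later in CI.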
The plugin is designed to defend against emerging generative AI-specific attacks, such as prompt injections and “AI hallucinations,” in which an AI system produces aberrant or unexpected output, not grounded in fact or relevant context, because of biases, constraints, or errors in its training data or algorithms.
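One way hallucinations become a supply-chain risk is when an assistant suggests a package name that does not exist, which attackers can then register. As a hedged illustration of the vetting idea (not the plugin’s real mechanism), a project could check AI-suggested dependencies against an approved list before installing anything:

```python
# Hypothetical allowlist; a real system would consult threat intelligence feeds.
APPROVED_PACKAGES = {"requests", "numpy", "flask"}

def vet_dependencies(suggested: list[str]) -> list[str]:
    """Return the suggested package names that are NOT on the approved list."""
    return [pkg for pkg in suggested if pkg.lower() not in APPROVED_PACKAGES]

# "reqeusts-pro" is a made-up, hallucination-style name that fails vetting.
unvetted = vet_dependencies(["requests", "reqeusts-pro"])
```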
Subsequent updates will expand CheckAI GPT’s use cases to include application programming interface (API) validation, infrastructure-as-code validation, and prompt protection.
CheckAI GPT is powered by Checkmarx One, an application security platform that provides scalability and smooth integration with preferred development environments. When used with its Supply Chain Threat Intelligence, the CheckAI plugin offers defense against harmful open-source packages and dependencies.
As AI adoption grows, particularly at the enterprise level, security vetting systems and plugins like Checkmarx’s CheckAI GPT are projected to become more common.