Highlights:

  • The new offering from Sama, formally known as Samasource Impact Sourcing, aims to address the possibility that generative AI models could have problems with privacy protection, public safety precautions, and legal compliance.
  • Sama service simulates scenarios where the model might participate in or encourage illicit activity, such as copyright infringement or fraudulent impersonation.

Data annotation solutions developer Sama launched Sama Red Team, the latest service to help developers proactively enhance the reliability and security of artificial intelligence models.

Sama Red Team safely exposes and fixes problems across text, image, voice search, and other modalities by utilizing the skills of machine learning engineers, applied scientists, and human-AI interaction designers. They also assess a model’s fairness and protections, verify compliance with legal requirements, and more.

The new offering from Sama, formally known as Samasource Impact Sourcing Inc., aims to address the possibility that generative AI models could have problems with privacy protection, public safety precautions, and legal compliance. Before a model is made available to the public, the Sama Red Team tests it for potential exploits and gives developers the information they need to fix the problems.

One of Sama Red Team’s features is its capacity to test models thoroughly to evaluate how well they work in four crucial areas: compliance, privacy, public safety, and fairness. The service’s fairness testing simulates actual situations that can cause the models to produce “discriminatory or offensive content.” To make sure that privacy requirements are respected, privacy testing entails creating prompts intended to force the model to reveal sensitive data, such as passwords, personally identifying information, or confidential information about the model itself.

Another element is public safety testing, where the team simulates becoming adversaries to assess the model’s resilience against real-world threats like cyberattacks, security lapses, or even mass casualty situations.

Sama Red also includes compliance testing because a law or regulation is generally nearby. The service simulates scenarios where the model might participate in or encourage illicit activity, such as copyright infringement or fraudulent impersonation. By doing this, possible flaws in the model’s capacity to control and protect against problems with justice, privacy, public safety, and legal compliance can be found and fixed.

Sama’s team tests a range of cues before assessing the model’s performance. The team will next modify the prompts or design new ones in response to the findings to further investigate the vulnerability. Large-scale tests may also be developed for more data. With over 4,000 annotators on staff, Sama can expand and intensify testing.

According to Tracxn, Sama is a venture capital-backed firm that has raised USD 84.8 million in funding. First Ascent Ventures LLC, Salesforce Ventures LLC, Vistara Growth LP, BESTSELLER Foundation, Ridge Ventures LP, Social Impact Ventures LP, and BlueCrest Capital Management LLP are among the investors.