Highlights:

  • The research finds that almost half of applications make no calls to security-sensitive APIs in their own code base, a figure that drops to 5% once open-source dependencies are counted, and it highlights the rise of services such as OpenAI LP’s ChatGPT application programming interface.
  • Even though 71% of the code in a typical Java application comes from open-source components, applications use only 12% of that imported code, according to the report.

According to the latest research from dependency lifecycle management firm Endor Labs Inc., artificial intelligence and large language models fail to classify malware risk accurately in the vast majority of cases.

Endor’s Station 9 research team put together the “State of Dependency Management 2023” report, which examines emerging trends that software firms should address as part of their security strategy and the risks involved in building applications on existing open-source software. The research finds that almost half of applications make no calls to security-sensitive APIs in their own code base, and it highlights the rise of services such as OpenAI LP’s ChatGPT application programming interface.

Among the report’s key conclusions is that current LLM technology cannot be relied on to assist in malware detection at scale. In the team’s testing, LLMs correctly classified malware risk in only 5% of all cases.

While AI and LLM models are helpful in manual processes, they will never be fully dependable in autonomous workflows because they cannot be programmed to recognize novel approaches, including attack techniques devised with the help of LLM recommendations.
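The report does not publish the team’s test setup, but the idea it evaluates, asking an LLM to label code as malicious or benign, can be sketched. The hypothetical Java snippet below posts a code sample to OpenAI’s chat completions endpoint; the prompt wording, model choice and class name are illustrative assumptions, not Endor’s methodology.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MalwareRiskProbe {

    // Hypothetical prompt: the report does not disclose Endor's actual prompts.
    private static final String PROMPT =
        "Classify the following package code as MALICIOUS or BENIGN and explain briefly:\n\n";

    public static String classify(String apiKey, String codeSnippet) throws Exception {
        // Crude JSON escaping, sufficient for a sketch.
        String content = (PROMPT + codeSnippet)
            .replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");

        String body = "{\"model\":\"gpt-3.5-turbo\","
            + "\"temperature\":0,"
            + "\"messages\":[{\"role\":\"user\",\"content\":\"" + content + "\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.openai.com/v1/chat/completions"))
            .header("Authorization", "Bearer " + apiKey)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // Returns the raw JSON; a real pipeline would parse out the model's verdict
        // and, per the report's finding, treat it as unreliable on its own.
        return response.body();
    }
}
```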

According to the report, 45% of applications have no calls to security-sensitive APIs in their own code base, but that figure falls to just 5% when dependencies are considered. The findings suggest that enterprises consistently underestimate risk when they fail to examine their use of such APIs in light of their open-source dependencies.
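As a hypothetical illustration of why that gap matters, consider an application whose own code never touches a security-sensitive API but whose dependency does; com.example.thirdparty and its Codec class below are invented names, not a real library.

```java
// File 1: first-party application code. A scan of this file alone finds
// no calls to security-sensitive APIs.
public class ReportService {
    public Object load(byte[] payload) {
        return com.example.thirdparty.Codec.decode(payload); // delegates to a dependency
    }
}
```

```java
// File 2: inside the (hypothetical) open-source dependency.
package com.example.thirdparty;

import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;

public class Codec {
    public static Object decode(byte[] payload) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(payload))) {
            // readObject() is a classic security-sensitive API (Java deserialization);
            // it only shows up once dependencies are included in the analysis.
            return in.readObject();
        } catch (Exception e) {
            throw new IllegalStateException("decode failed", e);
        }
    }
}
```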

The report also examines Java. Even though 71% of the code in a typical Java application comes from open-source components, applications use only 12% of that imported code, the study finds. Because vulnerabilities in unused code are rarely exploitable, businesses with reliable insight into which code is actually reachable throughout an application could avoid or deprioritize 60% of their remediation work.
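A minimal sketch of what that looks like in practice, using Apache Commons Lang purely as a stand-in for any large utility dependency; the vulnerability scenario in the comments is hypothetical:

```java
import org.apache.commons.lang3.StringUtils;

public class Greeter {
    // The build ships the entire commons-lang3 jar, but this one call is the
    // only piece of the imported code the application ever reaches.
    public String greet(String name) {
        return "Hello, " + StringUtils.capitalize(name);
    }
    // A vulnerability in, say, the library's serialization helpers would live in
    // code paths this application never executes. Reachability analysis can flag
    // such findings as unexploitable here and deprioritize the remediation work.
}
```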

ChatGPT’s API is already used in 900 Node Package Manager and Python Package Index packages spanning many problem domains, and three-quarters of those were found to be entirely new packages. According to the report, the combination of rapid expansion and a shortage of historical data makes such packages an inviting target for attackers.