
Meta’s Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks


A high-severity security flaw has been disclosed in Meta’s Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server.

The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, on the other hand, has assigned it a critical severity rating of 9.3.

“Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized,” Oligo Security researcher Avi Lumelsky said in an analysis earlier this week.

The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including using Meta’s own Llama models.

Specifically, the issue has to do with a remote code execution flaw in the reference Python Inference API implementation, which was found to automatically deserialize Python objects using pickle, a format that has been deemed risky due to the possibility of arbitrary code execution when untrusted or malicious data is loaded using the library.


“In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket,” Lumelsky said. “Since recv_pyobj will unpickle these objects, an attacker could achieve arbitrary code execution (RCE) on the host machine.”
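The risky pattern is easy to illustrate. The sketch below is assumed for illustration rather than taken from Meta’s code: pyzmq’s recv_pyobj() is a thin wrapper around pickle.loads(), so a service that calls it on a network-exposed socket deserializes whatever bytes a remote peer chooses to send.

    import zmq

    # Illustrative sketch of the risky pattern (not Meta's actual code):
    # recv_pyobj() runs pickle.loads() on whatever arrives at the socket, so a
    # network-exposed endpoint deserializes fully attacker-controlled bytes.
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://0.0.0.0:5555")  # reachable over the network

    while True:
        request = socket.recv_pyobj()  # unpickling untrusted input -> potential RCE
        socket.send_pyobj({"status": "ok", "echo": repr(request)})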

Following responsible disclosure on September 24, 2024, the issue was addressed by Meta on October 10 in version 0.0.41. It has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.

In an advisory issued by Meta, the company said it fixed the remote code execution risk associated with using pickle as a serialization format for socket communication by switching to the JSON format.
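As a hedged sketch of what that class of fix looks like in pyzmq terms (not Meta’s actual patch), the JSON helpers carry only plain data types, so the receiving side never reconstructs arbitrary Python objects from remote input:

    import zmq

    # Sketch of the remediated pattern: JSON expresses only plain data (strings,
    # numbers, lists, dicts), so nothing executable crosses the socket.
    context = zmq.Context()
    client = context.socket(zmq.REQ)
    client.connect("tcp://127.0.0.1:5555")

    client.send_json({"prompt": "Hello, Llama"})  # serialized with json.dumps()
    response = client.recv_json()                 # parsed with json.loads(), data only
    print(response)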

This is not the first time such deserialization vulnerabilities have been discovered in AI frameworks. In August 2024, Oligo detailed a “shadow vulnerability” in TensorFlow’s Keras framework, a bypass for CVE-2024-3660 (CVSS score: 9.8) that could result in arbitrary code execution due to the use of the unsafe marshal module.
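The underlying hazard is the same one pickle poses: marshal round-trips compiled Python code objects, so a loader that rebuilds and runs them from untrusted input executes attacker-supplied logic. A minimal, self-contained illustration of the primitive (not the Keras code path itself):

    import marshal

    # Illustrative only: marshal can serialize a compiled code object. Any loader
    # that calls marshal.loads() on attacker-controlled bytes and then executes
    # the result (e.g. while rebuilding a serialized layer) runs arbitrary code.
    blob = marshal.dumps(compile("print('ran during deserialization')", "<demo>", "exec"))

    code_obj = marshal.loads(blob)  # reconstructs a full Python code object
    exec(code_obj)                  # executing it is the code execution primitive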

The development comes as security researcher Benjamin Flesch disclosed a high-severity flaw in OpenAI’s ChatGPT crawler, which could be weaponized to initiate a distributed denial-of-service (DDoS) attack against arbitrary websites.

The issue is the result of incorrect handling of HTTP POST requests to the “chatgpt[.]com/backend-api/attributions” API, which is designed to accept a list of URLs as input, but neither checks if the same URL appears several times in the list nor enforces a limit on the number of hyperlinks that can be passed as input.


This opens up a scenario where a bad actor could transmit thousands of hyperlinks within a single HTTP request, causing OpenAI to send all those requests to the victim site without attempting to limit the number of connections or prevent issuing duplicate requests.

Depending on the number of hyperlinks transmitted to OpenAI, the flaw provides a significant amplification factor for potential DDoS attacks, effectively overwhelming the target site’s resources. The AI company has since patched the problem.
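The class of fix is plain input validation. A hypothetical handler-side sketch (illustrative names, not OpenAI’s implementation) shows the two missing checks, deduplication and a hard cap on list size:

    # Hypothetical validation sketch, not OpenAI's code: deduplicate the URL list
    # and enforce a hard cap before any crawl requests are queued.
    MAX_URLS = 10  # arbitrary illustrative limit

    def validate_attribution_urls(urls: list[str]) -> list[str]:
        # Drop duplicates while preserving order so one target cannot be hit
        # repeatedly from a single request.
        unique_urls = list(dict.fromkeys(urls))
        if len(unique_urls) > MAX_URLS:
            raise ValueError(f"too many URLs: {len(unique_urls)} > {MAX_URLS}")
        return unique_urls

    # Thousands of copies of one hyperlink collapse to a single fetch.
    print(validate_attribution_urls(["https://victim.example"] * 5000))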

“The ChatGPT crawler can be triggered to DDoS a victim website via HTTP request to an unrelated ChatGPT API,” Flesch said. “This defect in OpenAI software will spawn a DDoS attack on an unsuspecting victim website, utilizing multiple Microsoft Azure IP address ranges on which ChatGPT crawler is running.”

The disclosure also follows a report from Truffle Security that popular AI-powered coding assistants “recommend” hard-coding API keys and passwords, a risky piece of advice that could mislead inexperienced programmers into introducing security weaknesses in their projects.

“LLMs are helping perpetuate it, likely because they were trained on all the insecure coding practices,” security researcher Joe Leon said.
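The pattern and its conventional remedy can be shown in a brief hedged sketch (the variable name is illustrative, not taken from the report):

    import os

    # Risky pattern sometimes suggested by coding assistants: the secret lands in
    # source control and in every distributed copy of the code.
    API_KEY = "sk-live-1234567890abcdef"  # hard-coded credential (avoid)

    # Conventional remedy: read the secret from the environment or a secrets
    # manager at runtime. "SERVICE_API_KEY" is an illustrative variable name.
    API_KEY = os.environ.get("SERVICE_API_KEY")
    if API_KEY is None:
        raise RuntimeError("SERVICE_API_KEY is not set")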

News of vulnerabilities in LLM frameworks also follows research into how the models could be abused to empower the cyber attack lifecycle, including installing the final-stage stealer payload and establishing command-and-control.


“The cyber threats posed by LLMs are not a revolution, but an evolution,” Deep Instinct researcher Mark Vaitzman said. “There’s nothing new there, LLMs are just making cyber threats better, faster, and more accurate on a larger scale. LLMs can be successfully integrated into every phase of the attack lifecycle with the guidance of an experienced driver. These abilities are likely to grow in autonomy as the underlying technology advances.”

Recent research has also demonstrated a new method called ShadowGenes that can be used to identify a model’s genealogy, including its architecture, type, and family, by leveraging its computational graph. The approach builds on a previously disclosed attack technique dubbed ShadowLogic.

“The signatures used to detect malicious attacks within a computational graph could be adapted to track and identify recurring patterns, called recurring subgraphs, allowing them to determine a model’s architectural genealogy,” AI security firm HiddenLayer said in a statement shared with The Hacker News.

“Understanding the model families in use within your organization increases your overall awareness of your AI infrastructure, allowing for better security posture management.”
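HiddenLayer has not published its implementation, but the stated idea, matching recurring subgraphs in a model’s computational graph against signatures tied to known families, can be sketched in toy form (all names and patterns below are illustrative):

    from collections import Counter

    # Toy sketch of the stated idea (illustrative, not HiddenLayer's ShadowGenes):
    # treat the computational graph as an ordered list of op names, count
    # recurring fixed-length op sequences ("recurring subgraphs"), and compare
    # them against signature motifs associated with known model families.
    FAMILY_SIGNATURES = {
        "transformer-like": ("MatMul", "Softmax", "MatMul"),  # attention motif
        "convnet-like": ("Conv", "Relu", "MaxPool"),          # conv block motif
    }

    def recurring_subgraphs(ops, length=3):
        windows = (tuple(ops[i:i + length]) for i in range(len(ops) - length + 1))
        counts = Counter(windows)
        return {w for w, c in counts.items() if c > 1}  # keep repeated motifs only

    def guess_family(ops):
        repeats = recurring_subgraphs(ops)
        for family, signature in FAMILY_SIGNATURES.items():
            if signature in repeats:
                return family
        return None

    # Example graph with a repeated attention-style motif.
    graph_ops = ["MatMul", "Softmax", "MatMul", "Add",
                 "MatMul", "Softmax", "MatMul", "Add"]
    print(guess_family(graph_ops))  # -> "transformer-like"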
