Malicious AI models on Hugging Face backdoor users' machines

Malicious AI models hosted on Hugging Face can backdoor users' machines.

At least 100 malicious AI/ML models were discovered on the Hugging Face platform, some of which can execute code on the victim's machine, giving attackers a persistent backdoor.

Hugging Face is a tech company specializing in machine learning (ML), natural language processing (NLP), and artificial intelligence (AI). It provides a platform for collaborating on and sharing models, datasets, and complete applications.

According to JFrog's security team, around 100 models hosted on the platform feature malicious functionality, posing a significant risk of data breaches, espionage, and other threats.

This happened despite Hugging Face's security measures, including malware, pickle, and secrets scanning, as well as reviews of the models' functionality that look for behaviors like unsafe deserialization.

Achieving code execution via an AI model (JFrog)
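The pickle scanning mentioned above generally works by inspecting a file's pickle opcode stream for imports of dangerous callables before anything is deserialized. Below is a minimal, illustrative sketch of that idea using Python's pickletools; the deny-list and file name are hypothetical, and this is not Hugging Face's actual scanner.

```python
# Illustrative pickle scanner: flag dangerous imports in a pickle file
# without ever unpickling (and therefore executing) its contents.
import pickletools

# Hypothetical deny-list of module/name pairs that should never appear
# in a model checkpoint's pickle stream.
SUSPICIOUS_GLOBALS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "Popen"),
    ("builtins", "exec"),
    ("builtins", "eval"),
}

def scan_pickle(path):
    """Return suspicious GLOBAL imports found in the pickle file at `path`."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    # genops yields (opcode, argument, position) for each pickle opcode.
    # Simplification: real scanners also handle STACK_GLOBAL (protocol 4+).
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":          # arg looks like "os system"
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS_GLOBALS:
                findings.append((module, name))
    return findings

if __name__ == "__main__":
    hits = scan_pickle("suspect_model.pkl")  # hypothetical file name
    print("suspicious imports:", hits or "none")
```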

Malicious AI/ML models

JFrog's advanced scanning system examined the PyTorch and TensorFlow Keras models hosted on Hugging Face and found that one hundred of them included some form of malicious functionality.

The JFrog report states that "when we refer to 'malicious models,' we specifically denote those housing real, harmful payloads."

"This count excludes false positives, providing a true representation of the distribution of efforts to create malicious models for PyTorch and TensorFlow on Hugging Face."

Payload types found in malicious models (JFrog)

One highlighted example is a PyTorch model, recently uploaded by a user named "baller423" and since removed from Hugging Face, that contained a payload able to establish a reverse shell to a specified host (210.117.212.93).

The malicious payload used Python's pickle module's "__reduce__" method to execute arbitrary code upon loading a PyTorch model file, evading detection by embedding the malicious code within the trusted serialization process.

Payload that establishes a reverse shell (JFrog)
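To make the mechanism concrete, here is a minimal, deliberately harmless sketch of the "__reduce__" trick described above. The class name, file name, and echo command are stand-ins (the real payload opened a reverse shell); the point is that whatever callable "__reduce__" returns is invoked automatically when the file is unpickled.

```python
# Harmless demonstration of pickle's __reduce__ hook (stand-in payload,
# not the actual reverse-shell code from the report).
import os
import pickle

class EvilStub:
    def __reduce__(self):
        # A real payload would return something like
        # (os.system, ("<reverse shell one-liner>",)); this just echoes.
        return (os.system, ("echo code ran at load time",))

# "Model" file containing the booby-trapped object.
with open("model_demo.pkl", "wb") as f:
    pickle.dump(EvilStub(), f)

# Simply loading the file invokes os.system -- no method call needed.
with open("model_demo.pkl", "rb") as f:
    pickle.load(f)   # prints: code ran at load time
```

This is why loading untrusted checkpoints is risky in itself; mitigations include restricting what the unpickler may load (for example, torch.load's weights_only=True option) or using formats such as safetensors that store only tensor data.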

In separate instances, JFrog discovered the same payload connecting to other IP addresses, with the evidence suggesting that its operators might be AI researchers rather than hackers. Even so, their experimentation was risky and inappropriate.

The analysts set up a HoneyPot to attract and analyze the activity, but they were unable to capture any commands during the period of established connectivity (one day).

Setting up a honeypot to catch the attacker (JFrog)
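For illustration, a honeypot of this kind can be as simple as a TCP listener that accepts the incoming reverse-shell connection and logs anything sent over it. This sketch is not JFrog's actual setup; the bind address, port, and timeout are assumptions.

```python
# Rough sketch of a catch-all TCP listener used to observe what an attacker
# sends after a reverse shell connects (port number is illustrative).
import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 4444   # assumed listening address and port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"honeypot listening on {HOST}:{PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"[{stamp}] connection from {addr}")
            conn.settimeout(30)   # assumed idle timeout
            try:
                while True:
                    data = conn.recv(4096)
                    if not data:
                        break
                    # Log whatever commands the operator types, if any.
                    print(f"[{stamp}] {addr} sent: {data!r}")
            except socket.timeout:
                print(f"[{stamp}] no commands received before timeout")
```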

According to JFrog, some of the malicious uploads could be part of security research aiming to bypass Hugging Face's security measures and collect bug bounties, but since the dangerous models become publicly available, the risk is real and shouldn't be underestimated.

AI/ML models can pose significant security risks, and these risks haven't been properly considered or discussed by stakeholders and technology developers.

JFrog's findings highlight this problem and call for heightened vigilance and proactive measures to safeguard the ecosystem from malicious actors.
