LLMjacking: Tactics and best practices for defense

Since its discovery in May 2024, LLMjacking has evolved along with large language models (LLMs) themselves. Attackers are constantly developing new motives and methods for conducting LLMjacking, including rapidly expanding to new LLMs such as DeepSeek.

As the Sysdig Threat Research Team (TRT) reported back in September, LLMjacking attacks are growing in both frequency and popularity. Given this trend, we weren't surprised that DeepSeek was targeted within days of its media exposure and the subsequent surge in usage. LLMjacking has also drawn significant public attention, including a lawsuit filed by Microsoft against cybercriminals who stole credentials and used them to abuse its generative AI (GenAI) services. The lawsuit alleges that the defendants used DALL-E to generate offensive content. In our September LLMjacking update, we provided examples of attackers generating images in a benign manner.

🔎 Table of estimated costs of LLM abuse (Source: Sysdig 2025)

The costs of cloud-based LLM usage can be enormous, exceeding several hundred thousand dollars per month. Sysdig TRT found more than a dozen proxy servers using stolen credentials for a variety of services, including OpenAI, AWS, and Azure. The high cost of LLMs is why cybercriminals prefer to steal credentials rather than pay for LLM services.

LLMjackers are quickly adopting DeepSeek

Attackers adopt the latest models quickly after their release. For example, DeepSeek released its advanced DeepSeek-V3 model on December 26, 2024, and a few days later it was already implemented in an ORP (oai-reverse-proxy) instance hosted on Hugging Face:

This instance is based on a fork of ORP to which the author pushed a commit adding the DeepSeek implementation. A few weeks later, on January 20, 2025, DeepSeek released a reasoning model called DeepSeek-R1, and the very next day the author of the fork implemented it as well.

Not only has support for new models like DeepSeek been added, but we've also seen several ORPs stocked with DeepSeek API keys, and users are starting to use them.

LLMjacking tactics, techniques, and procedures

LLMjacking is no longer a passing fad. Communities have formed where tools and techniques are shared. ORPs are being forked and customized specifically for LLMjacking operations. Cloud credentials are tested for LLM access before being sold. LLMjacking operations are establishing a distinct set of TTPs, some of which we have identified below.

Communities

There are many active communities that use LLMs for adult content and to create AI characters for role-playing. These users prefer to communicate via 4chan and Discord, and they share access to LLMs via ORPs, both private and public. While 4chan threads are regularly archived, summaries of tools and services are often posted on Rentry.co, a Pastebin-style website that is a popular choice for sharing links and related access information. Pages hosted on Rentry support Markdown, custom URLs, and editing after publication.

While investigating LLMjacking attacks in our cloud honeypot environments, we discovered multiple TryCloudflare domains in the LLM prompt logs: the attacker had used the LLM to generate a Python script that interacted with ORPs. This allowed us to track back to servers using TryCloudflare tunnels.
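Surfacing such indicators from prompt logs is straightforward to automate. Below is a minimal sketch that extracts TryCloudflare tunnel URLs from plain-text log lines; `trycloudflare.com` is the real tunnel domain, but the log format and sample prompts are assumptions for illustration.

```python
import re

# TryCloudflare quick tunnels use random subdomains of trycloudflare.com.
TUNNEL_RE = re.compile(r"https?://[a-z0-9-]+\.trycloudflare\.com")

def extract_tunnel_domains(log_lines):
    """Return the unique TryCloudflare tunnel URLs seen in prompt logs."""
    hits = set()
    for line in log_lines:
        hits.update(TUNNEL_RE.findall(line))
    return sorted(hits)

# Hypothetical prompt log entries for demonstration.
sample = [
    'prompt: "write a python client for https://odd-words-here.trycloudflare.com/proxy/openai"',
    "prompt: summarize this article for me",
]
print(extract_tunnel_domains(sample))
# → ['https://odd-words-here.trycloudflare.com']
```

The same pattern extends to other tunnel providers by adding their domains to the expression.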

Credential theft

Attackers steal credentials through vulnerabilities in services such as Laravel, then run verification scripts to determine whether the credentials grant access to LLM services. Once they gain access to a system and find credentials, attackers run these verification scripts against everything they have collected. Another popular source of credentials is software packages in public repositories, which can inadvertently expose this data.

All of these scripts share common features: automation, and concurrency so they can work efficiently through a large number of (stolen) keys.
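The same concurrency pattern is useful defensively, for example to audit a batch of your own keys after a suspected leak. This is a generic sketch, not a reconstruction of any attacker script: the per-key check is injected as a callable (in practice, a lightweight authenticated API call), and the demo validator below is purely hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def audit_keys(keys, is_valid, max_workers=8):
    """Check many keys concurrently; returns {key: bool}.

    `is_valid` is whatever per-key check applies; it is injected
    so the concurrency pattern stays independent of any one API.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(is_valid, keys))
    return dict(zip(keys, results))

# Hypothetical validator for demonstration only.
fake_valid = {"sk-live-1", "sk-live-3"}
report = audit_keys(["sk-live-1", "sk-live-2", "sk-live-3"],
                    lambda k: k in fake_valid)
print(report)
# → {'sk-live-1': True, 'sk-live-2': False, 'sk-live-3': True}
```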

Best practices for detecting and combating LLMjacking

LLMjacking is primarily carried out by compromising credentials or access keys. It has become so widespread that MITRE has added LLMjacking to its ATLAS framework to raise awareness of the threat and help defenders detect this type of attack.

Defending against AI service account compromise primarily involves securing access keys, implementing strong identity management, monitoring for threats, and ensuring least-privilege access. Here are some best practices for protecting against account compromise:

Secure access keys

Access keys are an important attack vector and should therefore be carefully managed.

  • Avoid hard-coding credentials: Do not embed API keys, access keys, or credentials in source code, configuration files, or public repositories (e.g., GitHub, Bitbucket). Instead, use environment variables or secret management tools such as AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault.
  • Use temporary credentials: Use temporary security credentials instead of persistent access keys. For example, AWS STS AssumeRole, Azure Managed Identities, and Google Cloud IAM Workload Identity.
  • Rotate access keys: Rotate access keys regularly to shrink the window of exposure. Automate the rotation process wherever possible.
  • Monitor for exposed credentials: Use automated scanning to identify leaked credentials. Examples include AWS IAM Access Analyzer, GitHub Secret Scanning, and TruffleHog.
  • Monitor account behavior: When an account key is compromised, it typically deviates from normal behavior and begins performing suspicious actions. Continuously monitor your cloud and AI service accounts with tools like Sysdig Secure.
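Scanning for exposed credentials can start very simply: well-known key formats have public prefixes (AWS access key IDs begin with `AKIA` or `ASIA`; OpenAI keys begin with `sk-`). The sketch below is illustrative only, not a substitute for tools like TruffleHog; the patterns and the sample snippet are assumptions, and real scanners use far more rules plus entropy checks.

```python
import re

# Illustrative patterns based on publicly documented key prefixes.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan_source(text):
    """Return (rule_name, match) pairs for likely hard-coded credentials."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

# Obviously fake key, shaped like a real AWS access key ID.
snippet = 'client = Client(key="AKIAABCDEFGHIJKLMNOP")  # oops: hard-coded'
print(scan_source(snippet))
# → [('aws_access_key_id', 'AKIAABCDEFGHIJKLMNOP')]
```

Running a check like this in CI catches the most common mistake — a key pasted into source — before it ever reaches a public repository.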

Conclusion

As demand for access to advanced LLMs has grown, LLMjacking attacks have become increasingly popular. Because of the high costs involved, a black market for access to OAI reverse proxies (ORPs) has developed, and underground service providers have emerged to meet demand. LLMjacking proxy operators have expanded their access to credentials, adapted their offerings, and begun integrating new models such as DeepSeek.

Legitimate users have now become a prime target. With unauthorized use of accounts resulting in hundreds of thousands of dollars in losses for victims, proper management of these accounts and their associated API keys has become critical.

LLMjacking attacks continue to evolve, as do the motives that drive them. Ultimately, attackers will continue to attempt to gain access to LLMs and find new malicious uses for them. It's up to users and organizations to prepare, detect, and defend against them.

More at Sysdig.com



About Sysdig

In the cloud, every second counts. Attacks move at warp speed, and security teams must protect the business without slowing it down. Sysdig stops cloud attacks in real time by instantly detecting changes in risk with runtime insights and open source Falco.


