Security
Mar 26, 2026

The LiteLLM Supply Chain Attack: What Happened, Why It Matters, and What to Do Next

How a compromised AI dependency turned into a widespread credential-stealing attack, and what developers and organizations must do now.


If you’ve been anywhere near Python, AI tooling, or cloud-native development lately, you’ve probably heard about LiteLLM. It’s one of those libraries that quietly sits in the middle of everything, abstracting away the differences between LLM providers, like OpenAI and Anthropic, and making it easy to plug AI into apps.

Which is exactly why what happened next was such a big deal.

In March 2026, LiteLLM became the center of a major software supply chain attack when attackers slipped malicious code into official PyPI releases. The malicious releases carried a credential-stealing payload that potentially exposed secrets across cloud environments, CI/CD pipelines, and developer machines.

If you have any Python projects, check whether LiteLLM is listed as a dependency and which version is installed. If it’s 1.82.7 or 1.82.8, uninstall the package immediately, rotate all of your credentials, and audit your environment to see if you were impacted.
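
You can do that check from Python itself using only the standard library. A minimal sketch, where the helper name litellm_status and the printed messages are just illustrative:

```python
from importlib.metadata import PackageNotFoundError, version

# Versions identified as malicious in this incident.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status():
    """Return (installed_version, is_compromised) for litellm."""
    try:
        v = version("litellm")
    except PackageNotFoundError:
        return None, False  # not installed in this environment
    return v, v in COMPROMISED

installed, bad = litellm_status()
if bad:
    print(f"litellm {installed} is a compromised release -- uninstall now")
elif installed:
    print(f"litellm {installed} is not one of the known-bad versions")
else:
    print("litellm is not installed in this environment")
```

Run the check inside every virtual environment and container image you use, since each one has its own site-packages.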

Let’s break down what happened, how the attack worked, why LiteLLM was vulnerable, and what you should do about it.

What is LiteLLM?

LiteLLM is essentially a unified API wrapper for multiple LLM providers (OpenAI, Anthropic, Azure, etc.). Instead of juggling SDKs, developers can just call LiteLLM and route requests wherever they want.

That convenience has driven massive adoption: the package sees millions of downloads daily and is embedded in AI apps, backend services, CI/CD pipelines, and cloud infrastructure. As a result, LiteLLM sits in environments that typically have access to high-value secrets.

How LiteLLM Was Compromised on PyPI

The compromise was first discovered by engineers at FutureSearch, who were testing a Cursor MCP plugin that pulled in LiteLLM as a transitive dependency. Shortly after Python started, the machine became unresponsive due to RAM exhaustion. They traced the problem to the LiteLLM package and found litellm_init.pth, a 34,628-byte, double base64-encoded file, in site-packages/.

The LiteLLM PyPI package was compromised by an attacker who gained access to the maintainer’s PyPI account and published malicious versions (1.82.7 and 1.82.8) to PyPI. Any developer who ran a normal install command like:

pip install litellm

could unknowingly have pulled down the compromised version. These malicious versions were live for about two hours before being removed, but that’s all it took to have a huge impact.

The package maintainers are doing their best to handle the fallout; they even started posting updates on Hacker News because their GitHub account was also compromised. Notably, the malicious code didn’t originate in LiteLLM directly. It came in through a compromised Trivy dependency.

A Multi-Stage Credential Stealer

The malicious code embedded in LiteLLM was a multi-stage attack chain designed to maximize reach and stealth. It worked in three stages:

Stage 1: Initial execution

Once installed, the package executed code that hooked into the runtime, began collecting environment data, and prepared additional payload stages.
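
The hook here was Python’s .pth mechanism: when the interpreter processes a site directory at startup, any line in a .pth file that begins with "import" is executed. A harmless sketch of that behavior, using a throwaway directory instead of the real site-packages:

```python
import os
import site
import tempfile

# Lines in a .pth file that begin with "import" are exec()'d when the
# directory is processed as a site dir -- the same startup hook the
# malicious litellm_init.pth abused.
with tempfile.TemporaryDirectory() as d:
    pth = os.path.join(d, "demo_init.pth")
    with open(pth, "w") as f:
        # Benign payload: set an env var to prove the line ran.
        f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')
    # Normally site-packages is processed automatically at startup;
    # addsitedir triggers the same processing on our temp dir.
    site.addsitedir(d)

print(os.environ.get("PTH_DEMO"))  # -> executed
```

Because the code runs during interpreter startup, it fires in every Python process on the machine, long before any application logic gets a chance to look at it.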

Stage 2: Data harvesting

The malware aggressively targeted:

  • Environment variables (API keys, tokens)
  • Cloud credentials (AWS, GCP, Azure)
  • Kubernetes configs
  • SSH keys
  • Docker configs
  • CI/CD secrets
  • Database credentials
  • Even crypto wallets

Stage 3: Exfiltration

Collected data was then packaged and sent to attacker-controlled infrastructure. This is a classic infostealer pattern, but applied to developer environments instead of consumer machines.
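
To get a feel for what stage 2 could reach on your own machine, you can enumerate environment variable names that look like credentials; that is roughly the first thing an infostealer does. A defensive sketch that prints names only, never values (the name patterns are a rough heuristic, not taken from the actual malware):

```python
import os
import re

# Heuristic patterns for variable *names* that commonly hold secrets.
SECRET_NAME_RE = re.compile(
    r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE
)

def suspicious_env_names(environ=os.environ):
    """Return sorted env var names that look like credentials."""
    return sorted(n for n in environ if SECRET_NAME_RE.search(n))

# Audit what a stealer could have grabbed -- names only, no values.
for name in suspicious_env_names():
    print(name)
```

Anything this prints is something you should assume was harvested if an affected version ran in that environment.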

Why This Attack Was So Dangerous

There are a few reasons this incident caused so much alarm.

The blast radius was massive. LiteLLM is present in roughly 36% of cloud environments. Even if you don’t use it directly, other packages you depend on could pull it in as a transitive dependency, so you could still be affected.

That means one compromised dependency could affect a huge portion of modern AI infrastructure. Unlike typical malware, this wasn’t going after random files. It specifically targeted cloud credentials, API keys, and infrastructure configs.

It required almost no effort to trigger. No phishing, no social engineering, just pip install litellm or starting a Python app with an affected version installed.

It was also part of a broader campaign. The attack was linked to a larger supply chain compromise tied to the TeamPCP group and a previous breach involving Trivy and other tools, so this was a coordinated effort rather than a one-off.

How the LiteLLM Supply Chain Attack Actually Worked

Let’s zoom in on the mechanics of how this attack worked.

  • Compromise upstream or publishing process. Attackers likely gained access to publishing credentials or exploited upstream dependency compromise.
  • Publish malicious versions. They uploaded legitimate-looking releases with embedded malware.
  • Wait for installs. Since LiteLLM is widely used, automated builds pulled new versions and developers installed updates without suspicion.
  • Execute on install/runtime. Python packages can run code during install or when imported, which allowed the malware to execute immediately.
  • Steal and exfiltrate secrets. The payload enumerated sensitive data and sent it externally.
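
The execute-on-install/import step is worth seeing concretely: importing a package runs its __init__.py, so a booby-trapped release fires the moment any application imports it. A harmless demonstration using a temporary package (demo_pkg is made up for the example):

```python
import importlib
import os
import sys
import tempfile

# Build a tiny package whose __init__.py has a visible side effect.
with tempfile.TemporaryDirectory() as d:
    pkg = os.path.join(d, "demo_pkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        # Benign stand-in for a malicious payload.
        f.write('import os\nos.environ["IMPORT_DEMO"] = "ran"\n')

    sys.path.insert(0, d)
    importlib.import_module("demo_pkg")  # side effect fires right here
    sys.path.remove(d)

print(os.environ["IMPORT_DEMO"])  # -> ran
```

No function from the package was ever called; the import alone was enough, which is exactly why a compromised dependency needs zero cooperation from the application using it.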

Patching and Remediation Options

If you used LiteLLM recently, immediately check your installed versions and remove any compromised versions. Then rotate all credentials and keep an eye on the logs for anomalies.

If you run CI/CD pipelines, audit any recent builds, check for leaked secrets, and rotate pipeline tokens. If you manage infrastructure, review access logs, monitor for unusual activity, and revoke suspicious sessions.

If you’re a developer, introduce safer habits like pinning dependencies, reviewing updates before installing, using virtual environments, and limiting credential exposure. If you’re an organization, invest in software supply chain security, dependency scanning, and runtime monitoring.

Make sure you at least take these steps to secure your system:

1. Upgrade immediately

Move to a known clean version and avoid compromised releases (1.82.7, 1.82.8). As a general rule, always upgrade to the latest secure version.

2. Rotate all credentials

This is critical. If you installed an affected version, assume everything on that machine is compromised. Rotate all API keys, cloud credentials, SSH keys, and CI/CD tokens. PyPI explicitly recommends this approach.

3. Audit your environment

Check all of your environments, including local machines, for unauthorized access, new API usage, and suspicious outbound traffic. Also check your systems for the litellm_init.pth file in site-packages/.
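
That file check can be scripted across every site-packages directory with the standard site module. A minimal sketch (site.getsitepackages() can be unavailable in some older virtualenv setups, so treat this as a starting point):

```python
import os
import site

def find_suspicious_pth(names=("litellm_init.pth",)):
    """Search every site-packages directory for known indicator files."""
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    hits = []
    for d in dirs:
        for name in names:
            path = os.path.join(d, name)
            if os.path.exists(path):
                hits.append(path)
    return hits

hits = find_suspicious_pth()
if hits:
    print("indicator found:", hits)
else:
    print("no litellm_init.pth found in site-packages")
```

As with the version check, this needs to run inside each virtual environment and container image separately.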

4. Rebuild from clean state

For high-risk environments, you should consider rebuilding containers and reprovisioning infrastructure.

5. Pin dependencies

Pin dependency versions to avoid auto-updating:

litellm==1.82.6
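
Beyond pinning litellm itself, it’s worth flagging any requirement that isn’t pinned to an exact version. A small sketch; the regex is a rough heuristic, not a full PEP 508 parser:

```python
import re

# Matches lines pinned to an exact version, e.g. "litellm==1.82.6".
PINNED_RE = re.compile(r"^\s*[A-Za-z0-9._-]+\s*==\s*[^,;\s]+")

def unpinned(requirements_text):
    """Return requirement lines that lack an exact `==` pin."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if not PINNED_RE.match(line):
            bad.append(line)
    return bad

print(unpinned("litellm==1.82.6\nrequests>=2.0\n# comment\n"))
# -> ['requests>=2.0']
```

Exact pins trade convenience for predictability: you give up automatic updates, but a malicious release published tomorrow can’t walk into your build unnoticed.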

6. Add supply chain protections

Use tools for dependency scanning, signature verification, and SBOM tracking.
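
One concrete protection is verifying a downloaded artifact’s checksum against a digest published out-of-band (release notes, a signed lock file). A sketch using only the standard library; EXPECTED is a placeholder you must replace with a real known-good digest:

```python
import hashlib

def sha256_of(path, chunk=1 << 16):
    """Compute the SHA-256 digest of a downloaded artifact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# Placeholder -- obtain the real digest from a channel the attacker
# does not control, never from alongside the file itself.
EXPECTED = "<known-good sha256 goes here>"

def verify(path, expected=EXPECTED):
    """True only if the file on disk matches the published digest."""
    return sha256_of(path) == expected
```

pip can enforce the same idea natively via hash-pinned requirements and the --require-hashes flag, which refuses to install anything whose digest doesn’t match.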

Why PyPI Supply Chain Attacks Keep Working

This isn’t just a LiteLLM story; it’s a pattern in the ecosystem.

Research shows that over 74% of malicious packages reach users through normal installs, and many remain available even after detection. Attackers keep exploiting trust in package ecosystems, a lack of verification, and automated dependency updates.

This could have been a catastrophic attack. The potential exposure spans cloud infrastructure access, production databases, internal APIs, and deployment pipelines. The attackers could have used stolen credentials to spin up cloud resources, exfiltrate customer data, inject backdoors into builds, or pivot into internal networks.

Some reports suggest that hundreds of thousands of systems may have been affected. The only small silver lining is that a bug in the malware may have limited its effectiveness.

This incident reinforces a few hard truths.

Trust is the weakest link. Open source runs on trust in maintainers, registries, versioning, and a number of other resources, and attackers exploit that trust.

Developer environments are high-value targets. Modern dev environments contain production credentials and infrastructure access, which makes them prime targets.

“Install-time” attacks are underrated. Many developers assume malware requires deliberately running something, but in Python, installing or importing a package can execute code.

AI tooling increases risk. AI frameworks like LiteLLM sit at the center of systems and handle sensitive data, so they amplify the impact when compromised.

The LiteLLM incident is just one example of a growing trend where supply chain attacks are increasing and PyPI is a frequent target. Plus attackers are getting more sophisticated. We’re seeing things like multi-stage malware, targeted credential theft, and ecosystem-wide campaigns, and it’s only going to get worse.

Author: Milecia McGregor, Sr. Software Engineer