Security
May 13, 2026

Mini Shai-Hulud: Another npm Supply Chain Worm, and Why "Just Update" Isn't the Answer

The TanStack compromise shipped 84 malicious package versions with valid SLSA Build Level 3 provenance attestations. Cryptographic signing worked exactly as designed, and that's the problem.


On May 11, 2026, attackers published 84 malicious versions across 42 TanStack packages on npm in a six-minute window, including TanStack Router, a package pulled down more than 12 million times a week. The broader Mini Shai-Hulud campaign also hit packages tied to Mistral AI, UiPath, OpenSearch, and Guardrails AI. The detonation was loud, and the cleanup is still underway.

If you ran npm install, pnpm install, or yarn install against an affected version on May 11, your build host should be treated as compromised. That's the short version. The longer version is more uncomfortable, because this attack didn't break npm's defenses; it walked through them while they were holding the door open.

Here's what happened, why it matters even if you don't use TanStack, and what you should do today.

What Actually Happened

This wasn't a typosquat. It wasn't a sketchy package nobody recognized slipping into a dependency tree. The attacker chained three known weaknesses in GitHub Actions and CI/CD trust boundaries to publish malicious code from inside TanStack's own trusted release pipeline.

The short version: a pull request from a forked repository was allowed to execute code in a workflow with elevated trust (pull_request_target). That execution poisoned GitHub Actions' shared build cache with malicious files. Days later, when maintainers merged unrelated PRs to main, the legitimate release workflow restored the poisoned cache, ran the planted code, and that code extracted an OIDC token from runner memory and published 84 malicious package versions directly to npm.

No maintainer password was phished. No 2FA prompt was intercepted. The packages carried valid cryptographic provenance attestations because, technically, they were published from the correct repository and the correct workflow. Researchers are calling this the first documented case of malicious npm packages shipping with valid SLSA Build Level 3 provenance attestations.

That distinction matters. Cryptographic provenance worked exactly as designed: it verified origin, not intent. It's the software supply chain equivalent of checking that a bomb really did come from the correct factory.
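You can run the same verification yourself; a minimal check with recent npm versions, with the caveat that in this incident it would have reported the malicious versions as clean, because they really did come from the trusted repository and workflow:

    # Verify registry signatures and provenance attestations for every
    # installed package (requires a recent npm). In this incident the
    # malicious versions would have verified cleanly.
    npm audit signatures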

Once installed, the payload (a ~2.3 MB obfuscated JavaScript implant named router_init.js, masquerading as router initialization code) runs during npm install and goes hunting for credentials. AWS keys. GCP metadata. Kubernetes service account tokens. HashiCorp Vault tokens. ~/.npmrc. GitHub tokens from environment variables, the gh CLI, and .git-credentials. SSH private keys. 1Password and Bitwarden vault files. Cryptocurrency wallets.

Exfiltration is triple-redundant: a typosquat domain (git-tanstack[.]com), the decentralized Session messenger network, and GitHub API dead drops created with stolen tokens. The Session channel in particular is hard to disrupt: it's a privacy-focused peer-to-peer network whose traffic resembles ordinary encrypted messaging, with no traditional command-and-control server for your firewall to block.
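For a quick local sweep, the two indicators named above are easy to hunt for; a minimal sketch covering only these two (the advisory's full IoC list is longer):

    # Look for the implant file named in the reports
    find . -path "*/node_modules/*" -name "router_init.js" 2>/dev/null

    # Look for the exfiltration domain in installed package code
    grep -rl "git-tanstack" node_modules/ 2>/dev/null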

The malware also tries to spread. It enumerates other packages the victim maintains and attempts to republish them with the same payload, spoofing the commits to look like they came from the Anthropic Claude Code GitHub App (they didn't; the email is a fabricated GitHub no-reply address). As a parting gift, it installs a persistent gh-token-monitor daemon that polls GitHub every 60 seconds. If it sees the stolen tokens get revoked, it attempts to wipe the developer's home directory.

Researchers attribute the campaign to TeamPCP, the same group behind the LiteLLM PyPI compromise in March and the earlier Trivy and Checkmarx campaigns. The fact that they keep succeeding is the story.

Why This Matters Even If You Don't Use TanStack

Three reasons.

First, TanStack Router is a foundational piece of the modern React ecosystem. If your application, or any application you depend on, or any application they depend on, uses it, the malicious version may already be in your lockfile. Many of the compromised packages are transitive dependencies, meaning developers may have executed the malicious code without ever directly installing a TanStack package.

Second, the attack mechanism is portable. Every repository using pull_request_target workflows that execute fork code, every workflow restoring writable caches across trust boundaries, every release pipeline with broadly scoped id-token: write permissions, and every organization using trusted publishing without strict provenance validation has some version of this exposure. TanStack isn't a uniquely careless maintainer team. They were a target because they have reach. The same architectural weaknesses exist across the JavaScript, Python, Go, and AI tooling ecosystems.

Third, the cadence is accelerating. The original Shai-Hulud worm spread across hundreds of npm packages last September. The qix compromise. LiteLLM on PyPI in March. The axios maintainer account hijack at the end of March. SAP-related npm packages in April. PyTorch Lightning's PyPI package later that month. The Intercom client. And now Mini Shai-Hulud, at its largest scale yet. The attackers are getting better at hiding inside the trust signals defenders rely on instead of bypassing them outright.

The uncomfortable truth: cryptographic provenance, the thing the industry has spent years building toward as the answer to supply chain attacks, verified these malicious packages exactly as designed. It told the world these tarballs came from the right repository and the right workflow. They did. The workflow had been compromised. Provenance, signing, and trusted publishing are necessary but not sufficient. If the trusted pipeline itself becomes compromised, the ecosystem's strongest trust signals become camouflage.

What You Should Do Right Now

If you have any environment that ran npm install, pnpm install, or yarn install on May 11 and resolved a @tanstack/* package, treat it as compromised. That means CI runners, developer laptops, build servers, self-hosted GitHub Actions runners, shared package caches: anywhere the install lifecycle ran.

The incident is tracked as CVE-2026-45321, with the full affected-version list in GHSA-g7cv-rxg3-hmpx.

Audit Your Lockfiles

Check package-lock.json, pnpm-lock.yaml, and yarn.lock for any of the affected versions. If your lockfile predates the publish window and no fresh dependency resolution occurred, you may have been spared. If anyone on your team ran a fresh install during the affected window (including CI), assume exposure until proven otherwise.
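A starting point for that audit, assuming lockfiles at the repository root; compare every version this surfaces against the affected list in GHSA-g7cv-rxg3-hmpx:

    # Pull every @tanstack entry and its resolved version out of the lockfiles
    grep -n '"@tanstack/' package-lock.json 2>/dev/null
    grep -n '@tanstack/' pnpm-lock.yaml yarn.lock 2>/dev/null

    # Resolved-tree view that includes transitive dependencies
    npm ls --all 2>/dev/null | grep '@tanstack/'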

Rotate Credentials

Anything reachable from a potentially compromised install host should be treated as exposed:

  • AWS, Azure, and GCP credentials
  • Kubernetes service account tokens
  • HashiCorp Vault tokens
  • GitHub PATs and OAuth tokens
  • npm publish tokens
  • SSH keys
  • CI/CD secrets and environment variables
  • Anything sitting in .npmrc or dotfiles on a developer machine
  • 1Password and Bitwarden vault contents if the machine was unlocked

Rotation is annoying. Incident response after attackers pivot into your infrastructure is worse.
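For the npm and GitHub entries on that list, the inventory step looks roughly like this; <token-id> is a placeholder, and heed the ordering caveat in the persistence section below before revoking GitHub tokens:

    # Inventory npm publish tokens, then revoke each and issue fresh ones
    npm token list
    npm token revoke <token-id>

    # See which GitHub credentials the gh CLI is currently holding
    gh auth status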

Pin Dependencies and GitHub Actions

Floating references like @latest, ^, @v6, or @main are how malicious updates ride trusted dependency relationships into your build. Pin:

  • npm packages to exact versions
  • GitHub Actions to commit SHAs rather than tag names
  • Container base images to immutable digests

This won't prevent every attack, but it shrinks the window in which a freshly published malicious version can land in your pipeline.
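For the npm half of that list, exact-version saving can be made the default instead of a habit; a minimal sketch:

    # Make npm record exact versions instead of ^ ranges on every install
    npm config set save-exact true

    # Or commit the setting to the project so it applies to the whole team
    echo "save-exact=true" >> .npmrc

Pinning Actions to commit SHAs is the same idea applied to each uses: line in your workflow files.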

Audit Your CI/CD Workflows

Specifically, look for:

  • pull_request_target triggers that check out and execute fork code
  • Shared writable caches across trust boundaries
  • Broad id-token: write permissions
  • Long-lived automation tokens
  • Release jobs that restore artifacts produced by less-trusted workflows

If your workflow executes untrusted code with elevated trust, you have the same class of exposure TanStack did. Either restructure those workflows, or move them to pull_request (which sandboxes fork PRs by default).
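A crude first pass over a repository checkout; every hit is a lead to review by hand, not proof of a problem:

    # Fork PRs running with elevated trust
    grep -rn "pull_request_target" .github/workflows/

    # Broad OIDC grants and cache restoration in the same workflows
    grep -rn "id-token: write" .github/workflows/
    grep -rn "actions/cache" .github/workflows/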

Check Developer Tooling Directories

The malware embeds itself into .vscode/ and .claude/ configuration directories to maintain persistence across sessions, surviving even an npm uninstall of the malicious package. These directories are typically excluded from version control and rarely audited.
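One way to surface recent tampering is to list what changed in those directories since the publish window; a sketch using this incident's date as the cutoff (the paths are common locations, adjust to your setup):

    # Files in the implicated config directories modified since May 11
    find .vscode .claude ~/.claude -type f -newermt "2026-05-11" 2>/dev/null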

Look for unexpected hooks, injected tasks, modified settings, startup scripts, or suspicious extensions. And locate and remove the gh-token-monitor persistence daemon (~/Library/LaunchAgents/com.user.gh-token-monitor.plist on macOS, ~/.config/systemd/user/gh-token-monitor.service on Linux) before revoking any GitHub tokens, or you'll trigger the destructive wipe handler.
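Using the paths above, removal looks roughly like this; the critical part is the ordering, with the watcher killed before any token is revoked:

    # macOS: stop and delete the LaunchAgent first
    launchctl unload ~/Library/LaunchAgents/com.user.gh-token-monitor.plist
    rm ~/Library/LaunchAgents/com.user.gh-token-monitor.plist

    # Linux: stop and remove the user-level systemd unit
    systemctl --user disable --now gh-token-monitor.service
    rm ~/.config/systemd/user/gh-token-monitor.service
    systemctl --user daemon-reload

    # Only after the daemon is confirmed gone: revoke and reissue GitHub tokens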

The Bigger Pattern

We keep writing some version of this post. We wrote one about axios. We wrote one about the playbook. We wrote one about Vercel. The specific mechanism shifts (a hijacked maintainer account here, a poisoned OAuth grant there, a compromised CI cache today), but the underlying dynamic doesn't.

Modern software development runs on transitive trust between maintainers, CI systems, registries, automation platforms, and developers who install code they did not personally review. That trust scales beautifully when everyone is acting in good faith. It fails catastrophically when even one link (a maintainer's email, a build cache, a third-party action, an AI assistant's OAuth scope) gets compromised. Attackers have figured out that you don't need to break a hardened production system if you can compromise the systems production already trusts.

For open source maintainers, the answer involves real investment: hardware-backed authentication, hardened CI/CD patterns, safer default GitHub Actions workflows, funded security reviews, and isolation between PR execution and release infrastructure. Fewer volunteer maintainers carrying enterprise-scale risk alone.

For enterprises, the answer is harder. Open source isn't "free." It's critical infrastructure maintained under wildly uneven security budgets. If your organization depends on millions of weekly downloads maintained by a handful of people, dependency management isn't just a developer convenience problem anymore. It's operational risk management.

The next Mini Shai-Hulud is already being staged on some fork somewhere. The question isn't whether it will happen; it's whether your environment survives the next install command.

Author
Allison Vorthmann
Engineering Manager