SANDWORM_MODE: New npm supply chain worm targeting CI/CD secrets
Just saw the latest from Socket about the SANDWORM_MODE campaign - another "Shai-Hulud-like" supply chain worm hitting the npm ecosystem. They've identified at least 19 malicious packages actively harvesting crypto keys, CI secrets, and API tokens.
The attack vector follows the same pattern we've seen before - typosquatting and package confusion. What's concerning is the persistence of these campaigns. The worm-like behavior means it can propagate across dependencies once it gains a foothold.
Looking at the technical writeups, the malware specifically targets:
- `.env` files and environment variables
- `~/.aws/credentials` for AWS access keys
- Crypto wallets in standard paths
- CI/CD configuration files
- SSH keys and `known_hosts`
For detection, I'm running this query against our CI/CD logs:
SecurityEvent
| where EventID == 4688
| where CommandLine contains "npm install"
| where CommandLine has_any ("crypto", "key", "secret", "token", "wallet")
| project TimeGenerated, Computer, Account, CommandLine
Also checking package.json files for suspicious dependencies:
#!/bin/bash
jq -r '.dependencies | keys[]' package.json | xargs -I {} npm view {} author
Anyone else seeing activity related to this? What's your approach for validating npm packages before pulling them into production builds? Are you using lockfile-only installs or doing full audits?
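On the lockfile question, one approach is lockfile-only installs plus a crude resolved-URL check. A minimal sketch, assuming the public registry is your only allowed source (that allowlist is a policy choice, not a standard):

```shell
#!/bin/sh
# Lockfile-only install: fails if package.json and package-lock.json
# disagree, and never resolves to a newer (possibly hijacked) version.
npm ci --ignore-scripts

# Crude extra gate: fail if any resolved URL in the lockfile points
# somewhere other than the public registry (allowlist is an assumption).
if grep '"resolved":' package-lock.json | grep -vq 'registry.npmjs.org'; then
  echo "Non-npmjs resolved URL found in package-lock.json" >&2
  exit 1
fi
```

The `npm ci` step is the real control; the grep is just a cheap tripwire for lockfile tampering.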
We caught similar activity last week in our CI/CD pipeline. One dev pulled a package that had suspicious postinstall scripts.
We're now enforcing the `--ignore-scripts` flag in our npm installs:
npm install --ignore-scripts
Also added this to our `.npmrc`:
ignore-scripts=true
Socket's registry proxy has been decent for blocking known malicious packages, but the zero-day window is still scary. We're looking at integrating OSSF Scorecard checks into our PR workflows as an additional gate.
Red team perspective: These supply chain attacks are devastating because they bypass traditional network security controls. I've seen environments with hardened perimeters still compromised because a developer ran npm install without scrutiny.
If you want to test your detection capabilities, try this in a non-prod environment:
npm audit --audit-level=moderate --production
npm ls --json | jq '.dependencies | to_entries[] | select(.value.resolved != null)'
The real issue is most orgs don't have visibility into what's actually executing during package installation. Process monitoring is key - if you're not watching for child processes spawned by npm, you're flying blind.
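One way to get that visibility on a Linux runner is an auditd execve rule. A sketch; the rule key and the suspicious-command list are our own choices:

```shell
#!/bin/sh
# Record every process execution on the runner under a searchable key.
auditctl -a always,exit -F arch=b64 -S execve -k ci-exec

npm install

# Review what actually ran during the install; network tools or decoders
# spawned by a postinstall script are worth a closer look.
ausearch -k ci-exec -i | grep -E 'curl|wget|base64|nc '
```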
From the infrastructure side, we're locking down CI runners significantly. All npm installs now run in ephemeral containers with no access to:
- Vault/credential store
- Production network segments
- Developers' home directories
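The ephemeral-install step looks roughly like this. A sketch, assuming a Docker host and a custom `ci-registry-only` network that can only reach our registry proxy (both are assumptions, as is the `node:20` image):

```shell
#!/bin/sh
# Throwaway container: nothing mounted except the workspace, a non-root
# user, and a network that only reaches the registry proxy (assumed).
docker run --rm \
  --network=ci-registry-only \
  --user node \
  -v "$PWD":/app -w /app \
  node:20 npm ci --ignore-scripts
```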
We also validate against npm's public registry before allowing internal installs:
npm config get registry
npm view <package-name> repository.url
It's extra friction for the dev team, but after seeing what these worms can do, they understand the tradeoff. The key is automating the checks so it doesn't slow down the build pipeline.
Solid advice on containment. To complement that, we’ve started enforcing strict dependency provenance. Verifying that the package author signed the release makes it significantly harder for typosquatters to succeed, as they lack the private keys.
You can audit your environment for signed packages easily:
npm audit signatures
It’s not foolproof yet, but it filters out a massive amount of low-effort supply chain noise before it hits your runners.
To complement the containment strategies, we’re enforcing strict registry scoping via .npmrc. This prevents accidental public resolution of internal packages and mitigates typosquatting by locking specific scopes to private registries. We also generate npm-shrinkwrap.json on every build to ensure absolute version control of the dependency tree.
npm shrinkwrap
This means that even if a developer mistypes a scoped package name, the scope configuration pins resolution to the private registry before the install happens (unscoped names still need the other controls in this thread).
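For reference, the scoped .npmrc looks something like this; the scope name, hostnames, and token variable are examples, not our real ones:

```ini
; Pin the internal scope to the private registry; everything else
; resolves from the public registry (or a proxy in front of it).
@acme:registry=https://npm.internal.acme.example/
registry=https://registry.npmjs.org/
; Authenticate to the private registry with a token from the environment.
//npm.internal.acme.example/:_authToken=${NPM_TOKEN}
```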
Great thread. To add a detection layer to the containment strategies mentioned, we've started instrumenting our CI builds with strace. It helps identify hidden behaviors even if a script manages to execute or bypass standard checks.
strace -f -e trace=file,network -o strace.log npm install
We then scan the output for access to sensitive paths like `~/.aws` or `~/.ssh`. It’s a bit of a blunt instrument, but it catches a lot of these "living-off-the-land" attempts early.
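The post-run scan is just pattern matching over the trace. A sketch; the path list is ours to extend:

```shell
#!/bin/sh
# Flag any file syscalls that touched credential material during install.
if grep -E 'open(at)?\(.*(\.aws/credentials|\.ssh/|\.env|\.npmrc)' strace.log; then
  echo "Sensitive path accessed during npm install" >&2
  exit 1
fi
```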
Generating an SBOM for every build has been a game-changer for rapid response. When alerts like SANDWORM_MODE drop, we can immediately query our software inventory to check if we're running the affected packages or their dependencies. It speeds up triage significantly compared to manually grepping package files.
syft dir:. -o json > sbom.json
It doesn't stop the infection, but it drastically reduces the time to detection and remediation.
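Triage against a syft JSON SBOM is then a one-liner. A sketch, where "bad-package" stands in for whatever name an advisory lists:

```shell
#!/bin/sh
# List any occurrence of a named package (and its version) in the SBOM.
jq -r '.artifacts[] | select(.name == "bad-package") | "\(.name)@\(.version)"' sbom.json
```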
Solid advice on isolation. To add a layer of compliance, we're mandating SBOM generation for every build. This creates a manifest of all transitive dependencies, making it easier to audit the supply chain retroactively if a new CVE surfaces. We use syft for this:
syft dir:. -o spdx-json > sbom.json
This traceability is crucial when investigating how a worm might have slipped through other controls.