Supply Chain Alert: Mini Shai-Hulud Worm Targets TanStack & AI Ecosystem
TeamPCP is back with a vengeance under the 'Mini Shai-Hulud' campaign. This isn't just another random dependency confusion attack; they've successfully breached packages from TanStack, UiPath, Mistral AI, and others. The fact that they got into the TanStack ecosystem specifically is alarming given how ubiquitous that router is in modern React apps.
The technical pivot here is interesting. The compromised npm packages are dropping router_init.js. Based on the analysis, this script is heavily obfuscated and serves a 'profiling' function rather than immediate exploitation. It gathers data on the execution environment—likely identifying if it's a dev laptop, a CI/CD runner, or a production server—to decide whether to deploy the next stage. This makes dynamic analysis a nightmare since the payload might not detonate in a sandbox.
If you use any of these libraries, scan for this specific filename immediately. Here’s a quick command to grep your installed modules for references to it or other suspicious init scripts:
grep -r "router_init" ./node_modules --include="*.js" -l
And for those managing Python environments, verify your Guardrails AI and OpenSearch package hashes. The PyPI vector is just as active here.
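A minimal sketch of that hash-verification workflow (the wheel file and digest below are synthetic stand-ins, not real Guardrails AI or OpenSearch artifacts; in practice you'd pin the digests PyPI publishes for the exact release and install with pip's `--require-hashes` mode):

```shell
# Record a known-good SHA-256 for an artifact, then re-verify it later.
# The wheel here is a synthetic stand-in for illustration only.
mkdir -p /tmp/hashdemo && cd /tmp/hashdemo
printf 'wheel contents\n' > demo-0.1.0-py3-none-any.whl
GOOD=$(sha256sum demo-0.1.0-py3-none-any.whl | awk '{print $1}')
# In requirements.txt you would pin:  demo==0.1.0 --hash=sha256:<digest>
# Any later tampering with the file changes the digest:
NOW=$(sha256sum demo-0.1.0-py3-none-any.whl | awk '{print $1}')
[ "$GOOD" = "$NOW" ] && echo "hash OK" || echo "HASH MISMATCH"
```

With hashes pinned in requirements.txt, `pip install --require-hashes -r requirements.txt` refuses any distribution whose digest doesn't match, which closes the swapped-artifact vector.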
How is everyone handling the verification of 'maintainer accounts' for these high-trust libraries? It feels like 2FA isn't stopping account takeovers anymore.
The profiling aspect is the real kicker here. We saw a similar behavior with the recent dependency attacks targeting CI pipelines. If the malware detects it's in a GitHub Actions or Jenkins environment, it stays dormant to avoid automated scanners, only executing on a victim's local machine.
I've updated our YARA rules to include the specific hex patterns found in the obfuscated router_init.js. If you have a SIEM, I recommend creating a detection rule for node processes spawning bash or sh immediately post-install.
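Alongside YARA, it's cheap to enumerate which installed packages even register install-time lifecycle scripts, since that's the hook a dropper like router_init.js needs. A sketch (the evil-pkg fixture below is synthetic for illustration; point the grep at your real project root):

```shell
# Synthetic fixture standing in for a compromised package:
mkdir -p /tmp/nmdemo/node_modules/evil-pkg
cat > /tmp/nmdemo/node_modules/evil-pkg/package.json <<'EOF'
{ "name": "evil-pkg", "scripts": { "postinstall": "node router_init.js" } }
EOF
# List every package.json declaring a pre/postinstall hook:
HITS=$(grep -rl -e '"postinstall"' -e '"preinstall"' \
        /tmp/nmdemo/node_modules --include=package.json)
echo "$HITS"
```

Expect plenty of benign hits (native addons compile in postinstall), but it shrinks the review surface to the packages that could have executed code at install time.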
We use TanStack heavily for our dashboarding. Our automated pipelines actually caught this because we pin specific checksums in our lockfiles, but the alert fatigue on 'dependency updates' is real.
If you aren't using npm audit --audit-level=moderate as a gate in your CI/CD, now is the time. It won't catch everything, but it would have flagged the version mismatch for the compromised TanStack releases.
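For reference, a minimal sketch of that gate as pipeline steps (plain commands; adapt to your CI syntax). This is a config fragment, not a drop-in script; note that `--ignore-scripts` also blocks the install-time hook this worm relies on, at the cost of breaking packages that legitimately need lifecycle scripts:

```shell
# CI gate sketch -- run inside the checked-out repo on the build agent:
npm ci --ignore-scripts          # install from lockfile, no lifecycle hooks
npm audit --audit-level=moderate # non-zero exit on moderate+ advisories fails the job
```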
This validates why I refuse to use AI wrappers like Guardrails AI directly from public repos without vendoring them first.
For those looking to clean up, simply running npm ci might not be enough if your cache is poisoned. Purge the cache explicitly:
npm cache clean --force
rm -rf node_modules package-lock.json
npm install
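After the reinstall, a quick sanity check is that the fresh lockfile pins an integrity hash for its entries (the one-package lockfile below is a synthetic fixture; run the grep against your real package-lock.json):

```shell
# Synthetic minimal lockfile for illustration:
mkdir -p /tmp/lockdemo
cat > /tmp/lockdemo/package-lock.json <<'EOF'
{ "packages": { "node_modules/left-pad": {
    "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
    "integrity": "sha512-synthetic" } } }
EOF
# Count entries that pin an integrity hash (should track your package count):
N=$(grep -c '"integrity"' /tmp/lockdemo/package-lock.json)
echo "integrity-pinned entries: $N"
```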
Since the malware profiles the environment, we have to assume it scraped CI/CD tokens if it detected a runner. Cleaning the repo isn't enough; you must treat this as an identity incident. Rotate all cloud keys and PATs used in that environment immediately.
To check for suspicious credential usage in AWS, you can query CloudTrail:
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=GetAuthorizationToken --start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
Validating the artifact integrity is crucial here. Beyond lockfiles, scanning for the specific payload router_init.js is a quick win for immediate triage. I recommend running this recursive search in your project root to catch any lingering instances:
find . -path "*/node_modules/*" -name "router_init.js"
If found, isolate the environment immediately. Also, enforcing ephemeral build agents ensures that even if profiling occurs, the persistence window is minimized.
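Building on that find command: if it hits, capture a SHA-256 of each instance before isolating, so the hashes can be matched against published IoCs (the fixture below is synthetic; run the find from your real project root):

```shell
# Synthetic fixture standing in for a dropped payload:
mkdir -p /tmp/scandemo/node_modules/@tanstack/fake-pkg
echo 'obfuscated payload' > /tmp/scandemo/node_modules/@tanstack/fake-pkg/router_init.js
# Locate and fingerprint every instance in one pass:
REPORT=$(find /tmp/scandemo -path "*/node_modules/*" -name "router_init.js" \
          -exec sha256sum {} \;)
echo "$REPORT"
```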
Since we're dealing with an environment-aware payload, network egress blocking is just as critical as file scanning. If that router_init.js executes before you find it, it likely beacons out.
I'd suggest checking your perimeter logs for POST requests to unknown endpoints originating from build nodes. If you use EDR, or just want a quick host-level check, this PowerShell snippet finds recently modified JS files in node_modules:
Get-ChildItem -Path .\node_modules -Filter *.js -Recurse | Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-1) }
It catches the "new" files that slipped in past standard dependency updates.
To complement the filesystem scan, block egress traffic at the network layer. If the worm executes, it likely phones home with profiling data or for further instructions. I suggest auditing your DNS query logs for high-entropy domain resolutions or connections to known TeamPCP infrastructure. On Linux runners, you can list the active connections owned by node/npm processes:
ss -tnp | grep -E 'node|npm'
Solid advice on identity rotation, Yuki. To narrow down the breach window without relying solely on AV alerts, inspect your npm logs for installation timestamps of the affected packages. This helps pinpoint exactly when the environment was compromised. You can run this to find recent installs in your environment:
grep -i "install" ~/.npm/_logs/*.log | tail -n 50
Cross-referencing these timestamps with your CI/CD run IDs provides a precise timeline for the forensic investigation and determines if tokens were exposed.