UNC6426 to AWS Root: Analyzing the 72-Hour nx npm Breach

ICS_Security_Tom 3/11/2026

Just caught the report on UNC6426 leveraging the nx npm supply chain incident, and the timeline is absolutely terrifying. We’re talking full AWS admin access within 72 hours of the initial compromise.

What stands out isn't just the dependency confusion aspect, but how quickly they pivoted from a compromised package to stealing a developer's GitHub token. Once they had that token, the lateral movement into the cloud environment was instant. It’s a classic case of how software supply chain attacks are really just stepping stones to identity theft.

If you're running builds with nx, you need to audit your environment immediately. Check your package-lock.json against known bad hashes and review your GitHub audit logs for unauthorized OAuth access.
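To make that lockfile audit concrete, here's a rough one-liner (assumes an npm v2/v3 `package-lock.json` with a top-level `packages` map; the name pattern is my guess at the relevant nx scopes) that lists every nx-related entry with its resolved version and integrity hash so you can diff against published IoCs:

```shell
# List nx-related lockfile entries: path, version, integrity hash.
# Assumption: lockfileVersion 2 or 3 (top-level "packages" map).
jq -r '.packages | to_entries[]
       | select(.key | test("node_modules/(@nrwl/|@nx/|nx$)"))
       | "\(.key) \(.value.version) \(.value.integrity)"' package-lock.json
```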

I’ve whipped up a quick KQL query to run in Sentinel to correlate GitHub Actions AssumeRole events with subsequent IAM privilege escalations. You want to look for OIDC connections that immediately spawn new policies or users.

AWSCloudTrail
| where EventName == "AssumeRoleWithWebIdentity"
| where SourceIpAddress startswith "192.30.252." // GitHub IP range; verify against api.github.com/meta
| extend IssuedKeyId = tostring(parse_json(ResponseElements).credentials.accessKeyId)
| join kind=inner (
    AWSCloudTrail
    | where EventName in ("CreateUser", "AttachUserPolicy", "PutRolePolicy")
    | extend IssuedKeyId = UserIdentityAccessKeyId
) on IssuedKeyId
| project AssumeTime = TimeGenerated, EscalationTime = TimeGenerated1,
          EscalationEvent = EventName1, SourceIpAddress, UserIdentityArn = UserIdentityArn1

The 72-hour window is brutal. How is everyone handling GitHub token rotation in your CI/CD pipelines? Are you relying on manual reviews or automated secrets scanners to catch leaked PATs?

K8s_SecOps_Mei 3/11/2026

We actually stopped using long-lived Personal Access Tokens (PATs) for this exact reason. We moved everything to OIDC (OpenID Connect) between GitHub Actions and AWS. It limits the blast radius significantly because the token is short-lived and only scoped to the specific repository triggering the workflow.

If you haven't made the switch, I highly recommend it. It doesn't stop the initial package compromise, but it prevents the attacker from using a stolen token to access anything else in your org.
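For anyone making that switch, here's a rough sketch of the AWS side. Account ID, org/repo, and role name are all placeholders, and depending on your CLI version you may also need `--thumbprint-list` on the provider call:

```shell
# 1. Register GitHub's OIDC identity provider in the account (one-time).
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com

# 2. Trust policy: only workflows from one specific repo can assume the
#    role, so a token minted anywhere else in the org is useless here.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
      "StringLike":   { "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*" }
    }
  }]
}
EOF
aws iam create-role --role-name gha-ci --assume-role-policy-document file://trust.json
```

The `sub` condition is the part that actually shrinks the blast radius; without it, any repo in any org federating through the same provider could assume the role.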

DLP_Admin_Frank 3/11/2026

Good catch on the KQL. From a pentester perspective, the real issue here is the lack of segmentation between dev workstations and production cloud environments. Once UNC6426 had that dev token, they walked right in the front door.

You should also be running npm audit as a pre-commit hook. It won't catch zero-days, but it stops the low-hanging fruit.
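To wire that up, here's a minimal hook sketch (paths and audit level are just my defaults; adjust for your repo):

```shell
# Sketch: install a pre-commit hook that blocks commits when npm audit
# reports high or critical advisories. Bypass deliberately with --no-verify.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
npm audit --audit-level=high || {
  echo "Commit blocked: fix the advisories above or rerun with --no-verify." >&2
  exit 1
}
EOF
chmod +x .git/hooks/pre-commit
```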

DevSecOps_Lin 3/11/2026

We use a tool called syft to generate SBOMs (Software Bill of Materials) for every build. It doesn't stop the attack, but it drastically reduces the incident response time. When the news dropped, we just grepped our SBOMs for nx to see if we were exposed.

syft dir:. -o json | jq '.artifacts[] | select(.name=="nx")'

Visibility is half the battle in these supply chain messes.

ZeroTrust_Hannah 3/11/2026

The speed here is terrifying, so detection speed is critical. OIDC helps, but you need automated response. We implemented GuardDuty with automation that immediately revokes credentials if IAM privileges are escalated or new users are created.

To catch this via CloudTrail, watch for specific high-risk API calls dev tokens shouldn't make:

aws logs put-metric-filter --log-group-name CloudTrail --filter-name IAMChanges \
  --filter-pattern '{ ($.eventName = "CreateUser") || ($.eventName = "AttachUserPolicy") || ($.eventName = "PutRolePolicy") }' \
  --metric-transformations metricName=IAMChanges,metricNamespace=CloudTrailMetrics,metricValue=1

Automating the lockdown is the only way to beat a 72-hour pivot.
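One building block for that automated revocation, if anyone wants it: the same trick the IAM console's "Revoke sessions" button uses. Role name, policy name, and cutoff timestamp below are placeholders:

```shell
# Sketch: invalidate every session issued before the cutoff by attaching a
# deny-all inline policy keyed on token issue time. The role itself keeps
# working; only already-issued (potentially stolen) credentials go dead.
aws iam put-role-policy \
  --role-name gha-ci \
  --policy-name RevokeOlderSessions \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": { "DateLessThan": { "aws:TokenIssueTime": "2026-03-11T00:00:00Z" } }
    }]
  }'
```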

