Supply Chain Surge: PyTorch Lightning and intercom-client Compromised
Just catching up on the latest reports from Aikido, Socket, and the team at StepSecurity. It looks like we have another significant supply chain incident targeting the Python ecosystem: the popular Lightning package (PyTorch Lightning) and intercom-client were compromised to facilitate credential theft.
Specifically for PyTorch Lightning, versions 2.6.2 and 2.6.3 (published on April 30, 2026) are the malicious releases. The threat actors managed to push these to PyPI, likely aiming to scrape CI/CD secrets or developer credentials upon installation or import.
If you are running these versions, you need to act immediately. I recommend auditing your environments using pip-audit or a simple check:
pip list | grep -i lightning
Or, for a programmatic check inside a single environment (a multi-venv sweep follows after the snippet):
# Check the active environment for the known-malicious releases.
from importlib.metadata import PackageNotFoundError, version

MALICIOUS_VERSIONS = {"2.6.2", "2.6.3"}

for package_name in ("lightning", "pytorch-lightning"):
    try:
        installed_version = version(package_name)
    except PackageNotFoundError:
        continue  # not installed in this environment
    if installed_version in MALICIOUS_VERSIONS:
        print(f"ALERT: malicious {package_name} {installed_version} detected.")
    else:
        print(f"{package_name} {installed_version} looks OK.")
This reinforces the need for strict dependency pinning and reviewing SBOMs for every build. How is everyone handling the verification of package maintainers in their private registries? Are we doing enough to prevent these account takeover vectors?
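On the verification question, one cheap habit regardless of registry setup is to eyeball upload metadata before promoting a new release into a private index. A minimal sketch against PyPI's public JSON API (swap in whichever package you are vetting; the sort is lexicographic, which is fine for eyeballing):
# Minimal sketch: list a package's releases and upload timestamps via
# PyPI's public JSON API, to spot releases that look out of pattern.
import json
from urllib.request import urlopen

package = "pytorch-lightning"  # swap in the package you're vetting

with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
    data = json.load(resp)

for release, files in sorted(data["releases"].items()):
    for f in files:
        print(release, f["upload_time"], f["filename"])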
We use Socket.dev in our CI/CD pipeline specifically for this reason. It flagged the intercom-client update almost immediately because of the behavior change in the install script. For the PyTorch issue, we pinned our requirements.txt to 2.6.1 a while back, so we dodged the bullet there.
I'd suggest generating and scanning an SBOM if you haven't already; Syft and Grype are one workable pairing:
syft dir:. -o cyclonedx-json > sbom.json
grype sbom:./sbom.json
It's not just about the version numbers; detecting the *intent* in the code is becoming mandatory.
This is exactly why I advocate for air-gapped private PyPI mirrors for critical infrastructure. We don't allow direct pulls from the public index in our prod environments. When news like this drops, we just freeze the mirror until the maintainers re-sign a clean version.
It adds overhead to the update cycle, but it's cheaper than a breach. Has anyone looked at the hash for the malicious wheel yet? I'm curious if it's obfuscated standard Python bytecode or something more complex.
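In the meantime we compute digests of everything the mirror serves, so there is something to diff the moment IOC hashes are published. A minimal sketch, with /srv/pypi-mirror/wheels as a placeholder for your mirror's storage layout:
# Minimal sketch: print SHA-256 digests for every wheel in a local
# wheelhouse, to diff against IOC hash lists once they're published.
import hashlib
from pathlib import Path

for wheel in sorted(Path("/srv/pypi-mirror/wheels").glob("**/*.whl")):
    digest = hashlib.sha256(wheel.read_bytes()).hexdigest()
    print(f"{digest}  {wheel.name}")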
Great post. I've spent the morning auditing our dev boxes. The payload is pretty aggressive—it hooks into sitecustomize.py to ensure persistence and exfiltration. If you are using pytorch-lightning, downgrade to 2.6.1 immediately.
If you are on Kubernetes, don't forget to scan your images:
trivy image your-image-name:tag --severity CRITICAL
We found the malicious version lingering in a dev cluster that wasn't covered by our regular scan schedule.
Validating your inventory is the immediate priority here. While locking down infrastructure is ideal, you need to know if you're exposed right now. You can run this quick snippet to audit for the specific malicious Lightning versions locally or in your runners:
pip show pytorch-lightning 2>/dev/null | grep -E '^Version: 2\.6\.[23]$'
If that returns a match, assume your CI tokens are scraped and rotate them immediately.
Don't overlook network telemetry. While inventory checks are crucial, detecting the actual exfiltration requires visibility into outbound traffic. Monitor for anomalies in SSL/TLS handshake times or connections to the known malicious domains associated with these packages.
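I haven't seen a public IOC list yet, but once one lands, a rough sketch like this can sweep live connections against it. It needs psutil (third party) and enough privileges to see other processes' sockets; the addresses below are TEST-NET placeholders, not real indicators:
# Rough sketch: flag live connections whose remote address is on an
# IOC blocklist. Run with privileges that can see all processes.
import psutil

IOC_IPS = {"203.0.113.10", "198.51.100.7"}  # placeholders, not real IOCs

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip in IOC_IPS:
        print(f"suspect: pid={conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")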
If you manage diverse environments, checking for the specific sitecustomize.py persistence mechanism mentioned by Steve helps confirm if the trigger executed:
find / -name 'sitecustomize.py' -exec grep -l 'pytorch_lightning' {} \; 2>/dev/null
Solid insights everyone. While detection is critical, immediate remediation is just as important to stop the bleeding. If you confirm an infection, simply deleting the package might leave residual artifacts. I recommend forcing a downgrade to the last known safe release to ensure consistency. For PyTorch Lightning, run this to revert to version 2.6.1:
pip install pytorch-lightning==2.6.1 --force-reinstall
Going forward, strictly pin your dependencies (==) in requirements.txt rather than using loose version ranges. This prevents accidental uptake of malicious releases during your next CI build.
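A quick lint catches loose specifiers before they bite; this sketch assumes a flat requirements.txt with no -r includes or VCS URLs. Pip's hash-checking mode (pip install --require-hashes) goes a step further and rejects any artifact whose digest doesn't match what you recorded:
# Small sketch: flag requirements.txt entries that aren't strictly
# pinned with ==. Assumes a flat file, no -r includes or VCS URLs.
from pathlib import Path

for line in Path("requirements.txt").read_text().splitlines():
    spec = line.split("#", 1)[0].strip()  # drop comments and whitespace
    if spec and "==" not in spec:
        print(f"not strictly pinned: {spec}")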
Building on Steve's note about the sitecustomize.py persistence mechanism, you should proactively scan your virtual environments for unauthorized hook scripts. Simply checking version numbers isn't enough if the payload persists. Run this command to inspect the contents of any sitecustomize.py files within your installed packages:
find $VIRTUAL_ENV/lib/python*/site-packages -name "sitecustomize.py" -exec cat {} \;
Beyond identifying if the vulnerable versions are installed, verifying the integrity of your artifacts is critical. Since this was a compromise of the package index, cross-referencing file hashes ensures the downloaded artifacts haven't been tampered with. You can generate hashes for your local site-packages to compare against known-good signatures:
find . -path "*/lightning/*" -exec sha256sum {} + | sort
This gives you a stable fingerprint of what is actually on disk to diff against known-good hashes once you have remediated.
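If you'd rather check against the digests the wheel itself shipped with, every installed distribution's RECORD metadata stores a per-file sha256. A sketch using importlib.metadata; note this catches tampering after install, not a wheel that was malicious but internally consistent:
# Sketch: verify installed files against the sha256 digests recorded
# in the package's RECORD metadata.
import base64
import hashlib
from importlib.metadata import distribution

dist = distribution("pytorch-lightning")
for f in dist.files or []:
    if f.hash is None or f.hash.mode != "sha256":
        continue  # RECORD itself carries no digest
    try:
        digest = hashlib.sha256(f.locate().read_bytes()).digest()
    except FileNotFoundError:
        print(f"MISSING: {f}")
        continue
    encoded = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    if encoded != f.hash.value:
        print(f"MODIFIED: {f}")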