Weekly Recap Reflections: Legacy Code, Modern Malware, and the Trust Crisis
Saw the weekly recap today, and the headline "Everything is dumb again" feels painfully accurate. It’s wild that we’re seeing the resurrection of Fast16—a Lua-based sabotage framework from 2005—popping up in OT environments. You’d think two decades would be enough to patch operational basics, but here we are.
On the IT side, the supply chain hits are getting relentless. The Bitwarden CLI compromise (bw1.js) and the malicious Checkmarx/KICS Docker images show that even our security tools are turning into vectors. We’ve pinned our Docker images, but how long until the upstream registry itself is the weak link?
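Pinning alone only helps if it's enforced. A minimal sketch (assumes plain Dockerfiles; the regex and paths are illustrative, not a complete policy check) that flags any `FROM` line not pinned by an immutable digest:

```python
import re

# A FROM line counts as pinned only if it references an immutable
# digest (name@sha256:...) rather than a mutable tag like :latest.
PINNED = re.compile(r"^FROM\s+\S+@sha256:[0-9a-f]{64}\b", re.IGNORECASE)

def unpinned_from_lines(dockerfile_text):
    """Return FROM lines that rely on a mutable tag instead of a digest."""
    return [
        line.strip()
        for line in dockerfile_text.splitlines()
        if line.strip().upper().startswith("FROM")
        and not PINNED.match(line.strip())
    ]
```

Running this across a repo in CI, e.g. `unpinned_from_lines(open("Dockerfile").read())`, makes an unpinned base image a build failure rather than a code-review nitpick.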
The "Fake Help Desk" trend leveraging Microsoft Teams (UNC6692) is particularly concerning. They are abusing the inherent trust in internal collaboration tools to deliver SNOW malware. We’re seeing attackers move away from traditional email phishing to abusing "living off the land" binaries and trusted platforms.
Here is a KQL query I’ve started running to catch potential Teams abuse leading to suspicious PowerShell activity:
DeviceProcessEvents
// new Teams client runs as ms-teams.exe; classic client as Teams.exe
| where InitiatingProcessFileName in~ ("Teams.exe", "ms-teams.exe")
| where FileName in~ ("powershell.exe", "cmd.exe", "mshta.exe")
| where ProcessCommandLine contains "-enc" or ProcessCommandLine contains "DownloadString"
| project Timestamp, DeviceName, AccountName, InitiatingProcessCommandLine, ProcessCommandLine
Between this, the potential Federal backdoor discussions with XChat, and AI tracking gone rogue, it feels like the attack surface is expanding faster than we can inventory it.
How are you guys handling the verification of internal "support" requests? Are you moving to callback-only verification, or is there a technical control I'm missing for Teams abuse?
We moved to mandatory callback verification for any admin request initiated via chat, even internal ones. The added latency is annoying the dev team, but it stopped the SNOW delivery attempts we were seeing.
On the Fast16 note, it’s terrifying but fascinating. We scanned our OT controllers for Lua scripts (which shouldn't be there normally) and found some remnants from a vendor integrator that had no business being there. Legacy debt is a killer.
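For anyone wanting to repeat that sweep on engineering workstations, here's a rough sketch of the file-system side. The scan roots are placeholders for wherever your integrators drop files; this only finds loose `.lua` scripts, not Lua embedded in firmware or packed binaries:

```python
import os

# Directories where vendor integrators commonly drop files; these
# example paths should be adjusted to your own environment.
SCAN_ROOTS = [r"C:\Vendor", r"C:\Projects"]

def find_lua_files(roots):
    """Walk the given roots and return paths of any .lua files found."""
    hits = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            hits.extend(
                os.path.join(dirpath, name)
                for name in filenames
                if name.lower().endswith(".lua")
            )
    return hits
```

Anything it returns on a controller or HMI host that has no documented Lua integration deserves a closer look.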
The Docker supply chain issue is why we have completely blocked internet access from our build runners. We use an air-gapped intermediate registry now. It adds overhead to the CI/CD pipeline, but after the Checkmarx/KICS incident, we can't trust docker pull directly from the public hub anymore.
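The mechanical part of routing pulls through a mirror is just rewriting image references before they hit the pipeline. A sketch of that rewrite (the mirror hostname `registry.internal.example` is hypothetical; substitute your own intermediate registry):

```python
def to_internal(image_ref, mirror="registry.internal.example"):
    """Rewrite a public image reference to pull through the internal mirror.

    'checkmarx/kics:v1.7' -> 'registry.internal.example/checkmarx/kics:v1.7'
    References already pointing at the mirror are left untouched.
    """
    if image_ref.startswith(mirror + "/"):
        return image_ref
    # Strip an explicit Docker Hub prefix so the mirrored path stays canonical.
    if image_ref.startswith("docker.io/"):
        image_ref = image_ref[len("docker.io/"):]
    return f"{mirror}/{image_ref}"
```

Combined with a registry that only serves images promoted after scanning, the build runners never need a route to the public hub at all.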
As for Teams, we are looking at Conditional Access policies that restrict PowerShell execution unless the device is strictly managed.
Great KQL snippet. I modified it slightly to also look for msiexec spawns, since we've seen some initial access vectors using MSI installers pushed via chat file transfers.
DeviceProcessEvents
| where InitiatingProcessFileName in~ ("Teams.exe", "ms-teams.exe")
| where FileName in~ ("powershell.exe", "cmd.exe", "mshta.exe", "msiexec.exe")
| summarize count() by FileName, DeviceName
It's a cat-and-mouse game, but blocking the execution chain is the only way until Microsoft hardens the app interface.
Seeing Fast16 return is a nightmare for OT defenders. Since Greg is already tweaking KQL for chat-based vectors, we should be hunting for unauthorized Lua interpreters too. In a standard ICS network, Lua shouldn't be spawning unless it's a specific HMI script. This query helps spot the framework loading:
DeviceProcessEvents
// contains catches variants like lua53.exe that term-based "has" would miss
| where ProcessVersionInfoOriginalFileName contains "lua" or FileName contains "lua"
| where InitiatingProcessFileName !in~ ("scada_host.exe", "hmi_viewer.exe")
| project Timestamp, DeviceName, FileName, InitiatingProcessFileName, ProcessCommandLine