NASA OIG Report: When 'Academic Collaboration' is actually APT Espionage
Just read the NASA OIG report on an investigation in which a Chinese national posed as a U.S. researcher to siphon sensitive data. It's a classic case of long-term social engineering bypassing technical controls. The actor targeted universities and private companies as stepping stones to NASA, specifically aiming for defense software and export-controlled data.
What struck me is the patience involved. This wasn't a smash-and-grab; it was a "trust harvesting" operation. The TTPs here rely heavily on the perceived legitimacy of academic collaboration. We often see technical controls focused on malware signatures, but what about the legitimacy of the sender?
For those running SIEMs or Microsoft Sentinel, hunting for these pretexting campaigns is crucial. I usually look for external senders with low reputation scores but specific keywords like "research," "collaboration," or "paper" in the subject line (the EmailEvents table doesn't expose message bodies), coupled with attachment activity.
Here's a basic KQL query to start hunting similar patterns in your organization:
EmailEvents
| where SenderFromDomain != "yourdomain.com"
| where Subject has_any ("research", "collaboration", "paper")
| where AttachmentCount > 0
| join kind=inner EmailAttachmentInfo on NetworkMessageId
| project Timestamp, SenderFromAddress, Subject, RecipientEmailAddress, FileType
| sort by Timestamp desc
Given the target profile (defense/aerospace), how are you guys validating external research requests? Is it just a phone call, or do you have a specific workflow for validating academic credentials?
Great query. From a SOC perspective, we've started tagging domains that are less than 30 days old as 'Suspicious' regardless of SPF/DKIM pass status. These APT groups love to burn domains quickly. Also, checking the headers for mismatches between the 'Reply-To' and the 'From' address is a quick triage step that catches a lot of these 'researcher' impersonators.
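That Reply-To/From triage step is easy to automate. A minimal sketch using Python's stdlib email parser (the addresses and domains below are invented for illustration):

```python
from email import message_from_string
from email.utils import parseaddr

def replyto_mismatch(raw_headers: str) -> bool:
    """Flag messages where the Reply-To domain differs from the From domain.

    A common pattern in 'researcher' impersonation: From mimics a university
    address while Reply-To routes to attacker-controlled mail.
    """
    msg = message_from_string(raw_headers)
    from_addr = parseaddr(msg.get("From", ""))[1]
    reply_addr = parseaddr(msg.get("Reply-To", ""))[1]
    if not reply_addr:  # no Reply-To header: nothing to compare
        return False
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    return from_domain != reply_domain

sample = (
    "From: Dr. Jane Doe <j.doe@university.edu>\n"
    "Reply-To: <jane.doe.research@freemail-example.com>\n"
    "Subject: Research collaboration opportunity\n\n"
)
print(replyto_mismatch(sample))  # True: Reply-To routes off-domain
```

Run it over raw headers pulled from quarantine and you have a cheap first-pass filter before an analyst ever opens the message.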
The human element is always the hardest to patch. As a pentester, I find that the 'research collaboration' lure has a high success rate because employees want to be helpful and feel important. We recommend that organizations verify the researcher via a third-party directory (like ORCID or university staff pages) using a different channel than the email provided in the message.
We handle a lot of DLP for gov contractors, and this is exactly why we fingerprint documents. Even if they get duped into sending it, if the file contains the specific markers for ITAR/EAR data, the DLP block triggers an immediate alert. It's not perfect, but it stops the exfiltration even if the phishing succeeds.
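To make the idea concrete, here's a toy sketch of marker-based scanning. Real DLP fingerprinting uses exact-data-match hashes rather than keywords, and the marker list here is my own invention, not any product's ruleset:

```python
import re

# Hypothetical marker patterns; a real deployment would fingerprint the
# actual documents, not grep for phrases.
ITAR_MARKERS = [
    r"\bITAR\b",
    r"\bEAR99\b",
    r"export[- ]controlled",
    r"22 CFR 12[0-9]",  # ITAR is codified at 22 CFR 120-130
]

def flag_controlled_text(text: str) -> list[str]:
    """Return the markers found in an outbound document's extracted text."""
    return [m for m in ITAR_MARKERS if re.search(m, text, re.IGNORECASE)]

doc = "This document is export-controlled under ITAR (22 CFR 120-130)."
print(flag_controlled_text(doc))
```

Anything non-empty from a scan like this on outbound mail should block the send and page the SOC, regardless of how legitimate the recipient looks.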
From a DevSecOps standpoint, this reinforces why we need to secure the build pipeline, not just the perimeter. We enforce signed commits for any external collaboration to ensure code integrity. Even if credentials are phished, an attacker can't inject code without the private key.
git config commit.gpgsign true
This adds a cryptographic layer of trust that complements background checks.
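Enforcement is the other half: the repo config alone doesn't stop an unsigned push. A small sketch that audits `git log --format='%H %G?'` output for commits lacking a good signature (the hashes below are fabricated sample input):

```python
def unsigned_commits(git_log_output: str) -> list[str]:
    """Given output of `git log --format='%H %G?'`, return commit hashes
    whose signature status is not 'G' (good) or 'U' (good, untrusted key).

    Other %G? letters per git-log docs: N no signature, B bad signature,
    E signature cannot be checked, X/Y expired, R revoked.
    """
    bad = []
    for line in git_log_output.strip().splitlines():
        commit, _, status = line.partition(" ")
        if status not in ("G", "U"):
            bad.append(commit)
    return bad

sample = "aaa111 G\nbbb222 N\nccc333 B\n"
print(unsigned_commits(sample))  # ['bbb222', 'ccc333']
```

Wire this into a CI gate or a pre-receive hook and phished credentials alone no longer get code into the pipeline.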
The "trust harvesting" aspect makes this particularly nasty. We've found success in validating the digital identity of collaborators before granting access. Specifically, requiring a mature GPG/PGP key for signing research exchanges. It adds a layer of identity proofing that's harder to fake than a university email address.
You can automate the check to see if the key existed before the collaboration request:
gpg --list-keys --with-colons | grep '^pub:' | cut -d: -f6
If the creation date is recent relative to the project start, that's a major red flag for a fabricated persona.
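If you want to script that comparison end to end, here's a minimal Python sketch that parses the colon-format output (field 6 of a `pub:` record is the creation date as a Unix timestamp; the sample record and dates below are made up):

```python
from datetime import datetime, timezone

def key_created_after(gpg_colons_output: str, project_start: datetime) -> bool:
    """Parse `gpg --list-keys --with-colons` output and report whether the
    first public key was created after the project start date -- a red flag
    for a persona fabricated for this collaboration."""
    for line in gpg_colons_output.splitlines():
        fields = line.split(":")
        if fields[0] == "pub":
            created = datetime.fromtimestamp(int(fields[5]), tz=timezone.utc)
            return created > project_start
    raise ValueError("no public key record found")

# Fabricated colon-format pub record; 1717200000 is mid-2024.
sample = "pub:u:3072:1:ABCDEF0123456789:1717200000:::u:::scESC::::::23::0:"
print(key_created_after(sample, datetime(2023, 1, 1, tzinfo=timezone.utc)))
```

A `True` here isn't proof of anything on its own, but combined with a young domain and a Reply-To mismatch it's a strong signal.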