
Hijacking Gemini: How Scammers Are Weaponizing AI Chatbots for "Google Coin" Fraud

Security Arsenal Team
February 19, 2026
5 min read

In the rapidly evolving landscape of digital deception, the integration of Artificial Intelligence into social engineering has marked a terrifying new milestone. Cybercriminals are no longer just using AI to generate polished phishing emails; they are now deploying AI chatbots to act as aggressive, 24/7 sales agents for fraudulent schemes.

A recent campaign uncovered by researchers highlights this alarming trend. A sophisticated scam has emerged centering on a bogus cryptocurrency presale for "Google Coin." Unlike traditional scams that rely on static text or pressure tactics, the scam site features an embedded AI assistant—abusing the capabilities of Google's Gemini—to engage victims. The chatbot uses the perceived authority and linguistic fluency of AI to convince users that the investment is legitimate, ultimately funneling cryptocurrency directly into the attackers' wallets.

The Deep Dive: Automating Trust

The mechanics of this attack represent a significant evolution in the "pig butchering" and crypto-fraud playbook. Here is how the technical and psychological vectors align:

1. The Attack Vector: AI-Enabled Social Engineering

The core vulnerability exploited here is not a missing patch in an OS, but rather the "Trust Gap." Users inherently trust AI interfaces to provide factual, neutral information. By integrating a chatbot that mimics the personality and responsiveness of legitimate models like Gemini, the attackers bypass the skepticism usually reserved for flashy banner ads.

2. Technical Implementation

While the exact API integration method may vary, attackers often scrape legitimate model outputs or use jailbroken API keys to power these bots. The chatbot is programmed with a specific persona: confident, reassuring, and knowledgeable about the "benefits" of the fake token. It can handle objections in real time, creating a sense of interactivity and legitimacy that static HTML pages cannot match.

3. The Funnel

The victim lands on a professional-looking site (often via malicious SEO or phishing links). The "AI Assistant" pops up, answering questions about the presale, price predictions, and the (non-existent) partnership with Google. Once convinced, the victim is directed to send cryptocurrency to a wallet address controlled by the threat actors. The AI may even provide "confirmation" that the transaction is being processed, delaying the victim's realization of the theft.
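Because the funnel always terminates in a payout address embedded on the page, defenders who crawl suspected scam sites can flag those addresses automatically. The sketch below is illustrative only: the regexes match common Ethereum and Bitcoin address shapes but do not validate checksums, and `find_wallet_addresses` is a hypothetical helper name.

```python
import re

# Illustrative patterns: Ethereum (0x + 40 hex), legacy Bitcoin (Base58),
# and Bech32 (bc1...) addresses. A production scanner should also verify
# checksums to cut false positives.
WALLET_PATTERNS = {
    "ethereum": re.compile(r"\b0x[a-fA-F0-9]{40}\b"),
    "bitcoin_legacy": re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b"),
    "bitcoin_bech32": re.compile(r"\bbc1[a-z0-9]{25,59}\b"),
}

def find_wallet_addresses(page_text):
    """Return (chain, address) tuples for candidate wallet addresses in page text."""
    hits = []
    for chain, pattern in WALLET_PATTERNS.items():
        for match in pattern.findall(page_text):
            hits.append((chain, match))
    return hits

sample = "Send 0.5 ETH to 0x52908400098527886E0F7030069857D2E4169EE7 to join the presale!"
for chain, address in find_wallet_addresses(sample):
    print(f"{chain}: {address}")
```

Any address harvested this way can be pivoted into blockchain-explorer lookups to cluster related scam infrastructure.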

Threat Hunting & Detection

Detecting these campaigns requires a shift from looking for malware signatures to hunting for indicators of compromise (IOCs) associated with brand impersonation and anomalous traffic patterns. Since this is a web-based threat, your telemetry should focus on network traffic and DNS resolution.

1. KQL (Microsoft Sentinel / Defender)

Use this query to hunt for suspicious domains combining high-value brand names with crypto-related keywords, and to detect high volumes of traffic to newly registered domains.

Script / Code
let BrandKeywords = dynamic(["google", "gemini", "microsoft", "openai"]);
let CryptoKeywords = dynamic(["coin", "crypto", "token", "presale", "ico"]);
DeviceNetworkEvents
| where Timestamp > ago(7d)
| where RemoteUrl has_any (BrandKeywords) and RemoteUrl has_any (CryptoKeywords)
// RemoteUrl in DeviceNetworkEvents is typically a bare FQDN, so splitting on "." yields domain labels
| extend DomainParts = split(RemoteUrl, ".")
| extend RegisteredDomain = strcat(tostring(DomainParts[array_length(DomainParts) - 2]), ".", tostring(DomainParts[array_length(DomainParts) - 1]))
| where RegisteredDomain !in ("google.com", "microsoft.com") // Allow-list known legitimate domains
| summarize RequestCount = count() by DeviceName, RemoteUrl, RemoteIP, InitiatingProcessFileName
| where RequestCount > 5

2. PowerShell (Windows Endpoint)

This script hunts the local DNS cache for resolutions to suspicious domains that might indicate a user has visited a phishing site.

Script / Code
# Hunt for suspicious crypto-scam domains in the local DNS client cache
$SuspiciousKeywords = @("google-coin", "googletoken", "gemicrypto", "google-ico")

$DnsCache = Get-DnsClientCache

foreach ($Entry in $DnsCache) {
    foreach ($Keyword in $SuspiciousKeywords) {
        if ($Entry.Name -like "*$Keyword*") {
            Write-Host "[ALERT] Suspicious DNS Entry Found:" -ForegroundColor Red
            Write-Host "Domain: $($Entry.Name)"
            Write-Host "IP Address: $($Entry.Data)"
            Write-Host "TTL: $($Entry.TimeToLive)"
            Write-Host "---------------------------------"
        }
    }
}

3. Python (URL Fingerprinting)

A utility for security analysts to check a list of URLs for signs of brand impersonation using keyword matching and high-risk TLD heuristics.

Script / Code

from urllib.parse import urlparse

def is_suspicious_url(url, target_brand="google"):
    """Analyzes a URL for potential brand impersonation indicators."""
    try:
        parsed = urlparse(url)
        domain = parsed.netloc.lower()

        # Exact match against the legitimate domain
        if domain in (f"www.{target_brand}.com", f"{target_brand}.com"):
            return False, "Legitimate Domain"

        # Check for suspicious TLDs commonly used in scams (e.g., .xyz, .top)
        suspicious_tlds = ['.xyz', '.top', '.icu', '.cyou', '.shop']
        if any(domain.endswith(tld) for tld in suspicious_tlds):
            return True, f"Suspicious TLD detected: .{domain.split('.')[-1]}"

        # Check for brand keywords stuffed alongside crypto keywords
        if target_brand in domain and ('crypto' in domain or 'coin' in domain):
            return True, "Brand keyword mixed with crypto keywords"

        return False, "Low Risk"
    except Exception as e:
        # Treat unparsable URLs as suspicious rather than silently passing them
        return True, f"Error parsing URL: {e}"

# Example Usage
suspicious_urls = [
    "http://google-coin-presale.xyz",
    "https://google.com",
    "http://www.google-crypto-token.top"
]

for url in suspicious_urls:
    is_suspicious, reason = is_suspicious_url(url)
    print(f"URL: {url}\nStatus: {'ALERT' if is_suspicious else 'OK'}\nReason: {reason}\n")
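The keyword checks above miss look-alike domains that misspell the brand outright (e.g., "go0gle"). An edit-distance comparison covers that case. This is a minimal sketch: `levenshtein` and `looks_like_typosquat` are illustrative helper names, and the distance threshold of 2 is an arbitrary starting point to tune against your own traffic.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                  # deletion
                               current[j - 1] + 1,               # insertion
                               previous[j - 1] + (ca != cb)))    # substitution
        previous = current
    return previous[-1]

def looks_like_typosquat(domain, brand="google", max_distance=2):
    """Flag second-level labels within a small edit distance of the brand."""
    sld = domain.lower().split(".")[-2] if "." in domain else domain
    distance = levenshtein(sld, brand)
    # Distance 0 is the brand itself; small positive distances suggest typosquatting
    return 0 < distance <= max_distance

print(looks_like_typosquat("go0gle.xyz"))
print(looks_like_typosquat("google.com"))
```

In practice you would run this over newly registered domain feeds rather than individual URLs, since typosquats are usually caught most cheaply at registration time.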

Mitigation Strategies

  • User Awareness Training: Educate employees and users specifically on "AI-Assisted Phishing." Emphasize that chatbots on websites are not neutral parties; they can be programmed to lie.
  • Financial Verification Policies: Implement strict policies requiring verification of crypto-investment opportunities through official, offline channels (e.g., calling a verified corporate number) before any transaction.
  • DNS Filtering: Utilize enterprise DNS filtering solutions to block newly registered domains (NRDs) and domains containing high-risk keyword combinations (e.g., brand names + "crypto/loan").
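The keyword-combination rule in the last bullet can be prototyped as a single regex before committing it to a DNS filter or proxy blocklist. A minimal sketch, where the keyword lists and the `is_high_risk_domain` helper name are illustrative:

```python
import re

BRANDS = ["google", "gemini", "microsoft", "openai"]
CRYPTO_TERMS = ["coin", "crypto", "token", "presale", "ico", "loan"]

# Match any domain that combines a brand name with a crypto/loan term in
# either order, e.g. "google-coin-presale" or "coin-google". Short terms
# like "ico" can over-match, so tune the lists before deploying.
HIGH_RISK = re.compile(
    r"(?:%s)[a-z0-9-]*(?:%s)|(?:%s)[a-z0-9-]*(?:%s)" % (
        "|".join(BRANDS), "|".join(CRYPTO_TERMS),
        "|".join(CRYPTO_TERMS), "|".join(BRANDS),
    ),
    re.IGNORECASE,
)

def is_high_risk_domain(domain):
    """True if the domain mixes a brand name with a crypto/loan keyword."""
    return bool(HIGH_RISK.search(domain))

print(is_high_risk_domain("google-coin-presale.xyz"))
print(is_high_risk_domain("drive.google.com"))
```

Combined with an NRD feed, a rule like this blocks the scam domains on first sight without waiting for reputation services to catch up.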

Security Arsenal Plug

As AI-driven threats become more sophisticated, traditional defenses are struggling to keep up. At Security Arsenal, we stay ahead of the curve by anticipating the next generation of attack vectors. Our Managed Security services utilize advanced threat intelligence to detect and block emerging AI-phishing campaigns before they reach your users. Furthermore, our Red Teaming exercises can now simulate AI-enabled social engineering attacks to test your organization's resilience against these deceptive tactics. Don't let automated trust exploit your vulnerabilities—partner with us to fortify your human and digital firewall.

Are your security operations ready?

Get a free SOC assessment or see how AlertMonitor cuts through alert noise with automated triage.