Auditing Client-Side GCP Keys: The Gemini Spillover Risk
Just caught the latest research from Truffle Security regarding the exposure of nearly 3,000 Google Cloud API keys. It’s a classic case of scope creep causing a security headache.
We all know API keys starting with AIza are often tossed into client-side code for Google Maps or Places. However, the report highlights that if these keys aren't restricted to specific APIs (like just Maps), they can be abused to authenticate to sensitive endpoints like Gemini Pro. If a developer enables the Generative Language API on a project without updating the key restrictions, that public-facing key becomes a tunnel for data exfiltration or model abuse.
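To make the exposure concrete, here's a minimal sketch of what an attacker holds once they scrape an unrestricted key: the key alone authenticates the request as a query parameter. The endpoint shape and model name follow the public Generative Language REST docs; the key below is a fake placeholder, not a real credential.

```python
import urllib.parse

# Fake key for illustration only -- real keys are "AIza" plus 35 chars
LEAKED_KEY = "AIza" + "x" * 35

def gemini_request_url(leaked_key, model="gemini-pro"):
    """Build the REST URL an attacker could call with nothing but the key.

    Endpoint shape assumed from the public Generative Language API docs.
    """
    base = f"https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"
    return base + "?" + urllib.parse.urlencode({"key": leaked_key})

print(gemini_request_url(LEAKED_KEY))
```

No OAuth, no service account, no referrer check — which is exactly why the restrictions below matter.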
Remediation Steps
If you have keys in the wild, you need to lock them down immediately in the GCP Console:
- Application Restrictions: Limit the key to specific domains (HTTP referrers) or IP addresses.
- API Restrictions: Only allow the specific APIs needed (e.g., disable "Generative Language API" if the key is just for Maps).
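If you'd rather script the lockdown than click through the console, this is roughly the body the API Keys v2 REST API expects on a `PATCH` to `apikeys.googleapis.com/v2/{key_name}?updateMask=restrictions`. Field names follow the v2 REST reference; the service and domain values are placeholders — swap in your own.

```python
import json

# Sketch of the restrictions payload for API Keys v2.
# Service and referrer values are placeholders.
restrictions_body = {
    "restrictions": {
        "apiTargets": [
            {"service": "maps-backend.googleapis.com"}  # only Maps, no Gemini
        ],
        "browserKeyRestrictions": {
            "allowedReferrers": ["https://www.example.com/*"]
        },
    }
}

print(json.dumps(restrictions_body, indent=2))
```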
Detection
I spun up a quick Python script to scan our JS bundles for these patterns. It's a basic check, but effective for finding accidental commits:
```python
import re
import os

# Google API keys: "AIza" followed by 35 URL-safe characters (39 total)
regex = r'AIza[A-Za-z0-9\-_]{35}'

def scan_directory(directory):
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith(('.js', '.html')):
                filepath = os.path.join(root, file)
                with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
                    content = f.read()
                matches = re.findall(regex, content)
                if matches:
                    print(f"Found potential key in: {filepath}")

# Usage
scan_directory('./public')
```
For those of you managing larger environments, are you relying on manual code reviews, or have you integrated tools like truffleHog or gitleaks into your CI/CD pipeline to catch these before they hit production?
We integrated gitleaks into our GitHub Actions a while back, but it's noisy. The false positives on documentation files are annoying. However, it's better than the alternative.
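For the documentation noise, an allowlist in `.gitleaks.toml` helps a lot. A sketch, assuming gitleaks v8-style config — the path regexes are examples, so adjust them to your repo layout:

```toml
# .gitleaks.toml -- example allowlist to cut documentation noise
# (syntax per the gitleaks config reference; adjust paths to your repo)
[allowlist]
description = "Ignore docs and fixtures that trip key-shaped false positives"
paths = [
  '''docs/.*''',
  '''.*\.md$''',
]
```

Just be careful not to allowlist anything that could plausibly hold a live credential.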
I’d suggest enforcing application restrictions via Terraform rather than the console, so the restrictions live in version-controlled state and can't silently drift. If you don't pin the key to a specific HTTP referrer, it's effectively public.
```hcl
resource "google_api_key" "maps_key" {
  name = "maps-key"
  restrictions {
    api_targets {
      service = "maps-backend.googleapis.com"
    }
    browser_key_restrictions {
      allowed_referrers = ["*.mydomain.com/*"]
    }
  }
}
```
From a SOC perspective, monitoring the usage is critical even if you think the keys are safe. You can set up alerts in Cloud Logging for API calls originating from unexpected IPs or spikes in usage on the Generative Language API.
Here is a basic log sink query to monitor for generativelanguage.googleapis.com calls:
```
protoPayload.serviceName="generativelanguage.googleapis.com"
AND severity="ERROR"
```
If you see usage on a key that shouldn't be touching AI endpoints, assume compromise immediately and rotate.
The scary part isn't just the cost abuse (billing shock), it's the prompt injection/data leakage angle. If an attacker can call Gemini with your key, they might be able to extract training data or system prompts depending on the model configuration.
I've been moving clients away from API keys for anything sensitive and pushing them toward Workload Identity Federation where possible. For frontend maps, we proxy requests through a backend so the key never reaches the client. It adds a bit of latency, but it takes the key out of the client-side attack surface entirely.
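For reference, the proxy pattern can be sketched with nothing but the standard library. The endpoint and parameter names follow the classic Places Text Search REST API; the handler is a hypothetical minimal example, not production code (no auth, no rate limiting, and the upstream fetch is elided).

```python
import os
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

# Key stays server-side; it is never shipped in a JS bundle
MAPS_KEY = os.environ.get("MAPS_API_KEY", "server-side-only")

def upstream_url(query):
    """Build the Places Text Search URL with the server-held key."""
    params = urllib.parse.urlencode({"query": query, "key": MAPS_KEY})
    return f"https://maps.googleapis.com/maps/api/place/textsearch/json?{params}"

class PlacesProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = urllib.parse.urlparse(self.path).query
        query = urllib.parse.parse_qs(qs).get("q", [""])[0]
        # A real handler would fetch upstream_url(query) with urllib.request
        # and relay the JSON body; elided to keep the sketch short. The key
        # itself must never appear in the response.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "proxied"}')

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), PlacesProxy).serve_forever()
```

The client only ever sees `/api?q=...` on your domain, so there's nothing worth scraping from the bundle.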
Scanning helps catch new commits, but you also need to verify that existing keys in your project are actually restricted. Legacy keys might have broad permissions that weren't an issue until new APIs like Gemini were enabled. You can audit this programmatically using the GCP client libraries. Here's a Python script to list keys and identify any missing API targets:
```python
from google.cloud import api_keys_v2

def check_restrictions(project_id):
    client = api_keys_v2.ApiKeysClient()
    parent = f"projects/{project_id}/locations/global"
    request = api_keys_v2.ListKeysRequest(parent=parent)
    for key in client.list_keys(request=request):
        if not key.restrictions.api_targets:
            print(f"ALERT: Unrestricted key found - {key.display_name}")

# Usage
check_restrictions("my-project-id")
```
Building on the legacy key issue, automating the audit is the only way to scale. You can use gcloud combined with jq to quickly identify keys that lack API restrictions, leaving them open to Gemini access.
```shell
gcloud services api-keys list --format="json" | jq '.[] | select(.restrictions.apiTargets == null) | {name: .displayName, key: .keyString}'
```
Once you find these, enforce HTTP referrer restrictions immediately. This ensures the key only works from your specific domain, neutralizing the client-side exposure risk even if the key leaks.
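If jq isn't on the box, the same filter is a few lines of Python over `gcloud services api-keys list --format="json"` output. The sample data below is hypothetical, just to show the shape:

```python
import json

def unrestricted_keys(keys_json):
    """Return display names of keys with no apiTargets restriction,
    mirroring the jq filter over gcloud's JSON output."""
    keys = json.loads(keys_json)
    return [
        k.get("displayName", k.get("name", "<unnamed>"))
        for k in keys
        if not k.get("restrictions", {}).get("apiTargets")
    ]

# Hypothetical sample of `gcloud services api-keys list --format="json"` output
sample = json.dumps([
    {"displayName": "maps-key",
     "restrictions": {"apiTargets": [{"service": "maps-backend.googleapis.com"}]}},
    {"displayName": "legacy-key", "restrictions": {}},
])
print(unrestricted_keys(sample))  # ['legacy-key']
```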
For organizations managing large fleets, exporting your asset inventory to BigQuery is often more efficient than scripting against the API. This allows for SQL-based auditing to specifically flag keys where generativelanguage.googleapis.com is enabled but shouldn't be.
```sql
SELECT name, JSON_VALUE(resource.data, '$.displayName') AS display_name
FROM `project.cloud_asset_inventory`
WHERE asset_type = "apikeys.googleapis.com/Key"
  AND TO_JSON_STRING(resource.data) LIKE '%generativelanguage.googleapis.com%'
```
It creates a centralized, compliance-friendly audit trail that can be scheduled to run nightly.
Don't overlook Application Restrictions. While API limits are essential, they don't stop unauthorized usage of the specific allowed services. Enforcing HTTP referrers ensures the key only functions from your authorized domain, providing a second layer of defense.
You can audit for keys missing these referrer checks using jq:
```shell
gcloud services api-keys list --format="json" | jq '.[] | select(.restrictions.browserKeyRestrictions == null) | .name'
```
This effectively mitigates the risk of keys scraped from your client-side code being used elsewhere.