Vertex AI Blind Spot: Weaponizing Agents for Cloud Escalation
Hey everyone,
We tackled this last month after our red team used a compromised notebook to dump secrets from a GCS bucket. The fix was painful but necessary: we stopped using the default Vertex AI service account for everything, created custom service accounts with strictly scoped access (e.g., roles/storage.objectViewer only on specific dataset buckets), and explicitly assigned them to Vertex AI endpoints via the --service-account flag at deployment time. It breaks the 'easy button' workflow, but it contains the blast radius.
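For anyone wanting to replicate this, here's a rough sketch of the pattern. The project, bucket, service account, and endpoint/model IDs below are all placeholders for your own environment, not values from our setup:

```shell
# 1. Create a dedicated service account for the Vertex workload.
gcloud iam service-accounts create vertex-train-sa \
  --project=my-project \
  --display-name="Vertex training (least privilege)"

# 2. Grant read-only access on the specific dataset bucket only --
#    no project-wide storage roles.
gcloud storage buckets add-iam-policy-binding gs://my-dataset-bucket \
  --member="serviceAccount:vertex-train-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# 3. Pin the endpoint deployment to that account instead of the default.
gcloud ai endpoints deploy-model ENDPOINT_ID \
  --region=us-central1 \
  --model=MODEL_ID \
  --display-name=my-deployment \
  --service-account="vertex-train-sa@my-project.iam.gserviceaccount.com"
```

The key point is step 3: without --service-account, the deployment silently inherits the default agent and all its permissions.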
Solid post. From a SOC perspective, the KQL query is useful, but the hard part is tuning the noise. Vertex AI agents read artifacts constantly during training. We've had to correlate the storage.objects.get calls with specific vertexai.googleapis.com job IDs. If the storage read happens outside of an active training job ID's lifecycle, that's our trigger for an alert. Context is key here.
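To make the correlation approach concrete, here's roughly how we pull the two sides of it with gcloud. The project and SA email are placeholders, and this assumes Data Access audit logs are enabled for Cloud Storage:

```shell
# Pull storage.objects.get events made by the Vertex workload SA
# over the last 24 hours.
gcloud logging read \
  'protoPayload.methodName="storage.objects.get"
   AND protoPayload.authenticationInfo.principalEmail="vertex-train-sa@my-project.iam.gserviceaccount.com"' \
  --project=my-project --freshness=1d --format=json

# Pull the lifecycle windows of recent custom jobs; any read whose
# timestamp falls outside a create/end window is the alert trigger.
gcloud ai custom-jobs list --region=us-central1 \
  --format="table(name,createTime,endTime,state)"
```

The actual join of timestamps against job windows happens in our SIEM, but this is the raw data it runs on.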
Agreed on the noise issue. We’ve mitigated it by enforcing custom service accounts with least privilege rather than just disabling defaults. It’s also crucial to audit legacy notebooks that may still be running as the default Compute Engine service account. You can scan your environment for outliers with a simple gcloud command (substitute your notebook zone for the --location value):
gcloud notebooks instances list --location=us-central1-a --filter="serviceAccount ~ compute@developer.gserviceaccount.com" --format="table(name,serviceAccount)"
This helps pinpoint where you need to rotate creds immediately.
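One caveat worth flagging: in our experience the service account on a user-managed notebook instance is fixed at creation, so remediation means recreating the instance on a least-privilege SA rather than patching it in place. A rough sketch, with placeholder names and zone:

```shell
# Tear down the offending instance...
gcloud notebooks instances delete legacy-notebook --location=us-central1-a

# ...and recreate it bound to the scoped service account.
gcloud notebooks instances create clean-notebook \
  --location=us-central1-a \
  --vm-image-project=deeplearning-platform-release \
  --vm-image-family=common-cpu-notebooks \
  --machine-type=n1-standard-4 \
  --service-account="vertex-train-sa@my-project.iam.gserviceaccount.com"
```

Back up any work on the old instance first, obviously.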