Google’s Warning: AI has become an Attack Surface

On August 16, 2025, Google issued a rare security advisory. The alert? Indirect prompt injections. A blog post cited on Yahoo News highlighted this alert, noting that some 1.8 billion Gmail users could be affected by the emerging threat.

This is not a traditional phishing attack. There are no suspicious links, no “click here.” Instead, malicious instructions are hidden inside normal-looking emails, calendar invites, or documents. When an AI assistant like Gemini processes the content, those hidden prompts execute—sometimes asking for passwords, sometimes generating fake alerts, sometimes even triggering connected systems.

The implications for enterprises are significant:

  • The target is no longer the employee, but the AI they use.
  • Standard defences like mail filters and awareness training don’t catch invisible instructions.
  • AI-driven workflows (summaries, ticketing, automation) can be hijacked without warning.

For executives, the message is clear: AI is not just a productivity tool—it’s a potential point of compromise. Governance, controls, and oversight must adapt accordingly.

The boardroom question is no longer “How do we deploy AI?” but “How do we secure the AI we already depend on?”

Indirect prompt injections

Indirect prompt injections are a new class of attacks that hide malicious instructions in plain sight that are emerging as a real threat to GenAI. Unlike traditional phishing, where a user must click a malicious link, prompt injections exploit the trust we place in AI assistants themselves.

Hidden text in an email, a calendar invite, or even a shared document can instruct an AI to perform dangerous actions: ask for your password, connect to fraudulent support numbers, or even manipulate IoT devices. Google explains in its Google Security Advisory that these attacks “operate when malicious instructions are embedded into data an AI system consumes,” and may run alongside the AI’s legitimate task.

The danger here is subtle but profound. A poisoned calendar invite doesn’t look suspicious to the human eye. But once an AI assistant processes it, hidden prompts may execute unintended actions without any conscious “click” or consent from the user. The attack surface has shifted: we are no longer just defending humans from phishing, but defending machines from being phished.

It is an illustration of AI being hacked in plain sight. It tells us something important: AI can be manipulated at its core. If attackers can “talk” to our AI systems through poisoned data, then every AI-driven workflow becomes a potential attack vector.

3 Replies to “Google’s Warning: AI has become an Attack Surface”

Leave a comment