Hidden Web Attacks Hijack AI Agents to Steal Payments
Google reports a 32% surge in indirect prompt injection attacks, where malicious web pages embed invisible instructions for AI agents, tricking them into executing unauthorized PayPal and Stripe transactions. No legal framework defines liability when agents act on third-party commands.
Quick Take
Attacks hide instructions in HTML invisible to humans, readable by AI.
Payloads included fully specified PayPal payment instructions.
No clear liability when AI agents execute malicious commands.
Threat expected to grow as agentic AI systems become more capable.
Market Impact Analysis
NeutralArticle highlights AI security vulnerabilities with financial transaction risks, but lacks direct crypto market implications.
Speculation Analysis
Key Takeaways
- Hidden web attacks on AI agents surged 32% between November 2025 and February 2026, Google confirms.
- Malicious payloads contained fully specified PayPal and Stripe transactions invisible to humans.
- No legal framework exists for liability when an AI agent executes unauthorized payments.
- Enterprise AI systems with payment capabilities face escalating risk as attacks grow more sophisticated.
What Happened
Google uncovered a sharp rise in indirect prompt injection attacks targeting AI agents browsing the web. Attackers embed hidden instructions in HTML—shrunk text, metadata, or comments—visible only to AI that reads full page source. When an agent processes these pages, it can be tricked into executing commands like sending payments via PayPal or Stripe. Google’s scans of 2–3 billion pages per month found a 32% jump in malicious cases between November 2025 and February 2026. Security firm Forcepoint simultaneously reported in-the-wild payloads with fully specified transaction steps, including the “ignore all previous instructions” jailbreak. Because agents use legitimate credentials and behave normally, attacks leave no anomalous logs.
The Numbers
Google scans 2–3 billion crawled web pages each month for hidden prompt injections. The 32% surge in detected attacks signals rapid escalation. Among dangerous payloads found: one instructed an agent to return a user’s IP address and passwords; another attempted to format the AI’s machine. Forcepoint’s financial payloads were more advanced: a complete PayPal transaction script with step-by-step routing instructions, and a Stripe donation link embedded via meta tag namespace injection paired with persuasion amplifiers. A third payload appeared designed solely to probe which agents are vulnerable—reconnaissance before a larger strike. Critically, no legal framework decides who is liable when an agent executes malicious commands sourced from a third-party website.
Why It Happened
As agentic AI systems gain payment and browsing capabilities, attackers adapt classic injection techniques. By hiding payloads in plain site HTML, they exploit the AI’s full-page reading while humans see nothing. The same “ignore previous instructions” prompt jailbreaks used against chatbots now steer financial actions. Because agents operate with valid credentials and generate normal-looking logs, malicious activity is hard to detect. The economic incentive is high: a single successful payload can divert funds instantly. Enterprises racing to deploy AI agents often overlook these subtle, code-level threats, and the attack surface widens with every new agent given financial access.
Broader Impact
This isn’t just a technical vulnerability—it exposes a legal vacuum. No regulatory framework determines liability when an authorized AI agent executes a command planted by a malicious website. As agents expand into cross-chain crypto payments, the same attack vector could compromise decentralized finance. Until standards emerge, every agent with financial access is a soft target, and the window to build defenses is closing fast.
What to Watch Next
- Regulatory push: Watch for U.S. and EU proposals on AI agent liability and security standards.
- Defensive tools: Security firms are developing agent firewalls that screen webpage instructions before execution.
- Web standards shift: Expect discussion around “agent-safe” HTML practices that block hidden prompts by default.
This article is for informational purposes only and does not constitute financial advice.
Always late to trends?
Join for the latest news, insights & more.
Disclaimer: Bytewit is an independent media outlet that delivers news, research, and data.
© 2026 Bytewit. All Rights Reserved. This article is for informational purposes only.