Technology & InnovationNeutral
47

Hidden Web Attacks Hijack AI Agents to Steal Payments

Google reports a 32% surge in indirect prompt injection attacks, where malicious web pages embed invisible instructions for AI agents, tricking them into executing unauthorized PayPal and Stripe transactions. No legal framework defines liability when agents act on third-party commands.

DecryptJose Antonio Lanz

Quick Take

1

Attacks hide instructions in HTML invisible to humans, readable by AI.

2

Payloads included fully specified PayPal payment instructions.

3

No clear liability when AI agents execute malicious commands.

4

Threat expected to grow as agentic AI systems become more capable.

Market Impact Analysis

Neutral

Article highlights AI security vulnerabilities with financial transaction risks, but lacks direct crypto market implications.

Timeframemedium

Speculation Analysis

Factuality85/100
RumorsVerified
Speculation Trigger20/100
MinimalExtreme FOMO

Key Takeaways

  • Hidden web attacks on AI agents surged 32% between November 2025 and February 2026, Google confirms.
  • Malicious payloads contained fully specified PayPal and Stripe transactions invisible to humans.
  • No legal framework exists for liability when an AI agent executes unauthorized payments.
  • Enterprise AI systems with payment capabilities face escalating risk as attacks grow more sophisticated.
Attack Surge 32% Nov 2025 – Feb 2026
Pages Scanned 2-3B/Month Google’s detection net
Real Payloads PayPal & Stripe TXs Invisible to humans
Liability Framework None For agent-executed actions

What Happened

Google uncovered a sharp rise in indirect prompt injection attacks targeting AI agents browsing the web. Attackers embed hidden instructions in HTML—shrunk text, metadata, or comments—visible only to AI that reads full page source. When an agent processes these pages, it can be tricked into executing commands like sending payments via PayPal or Stripe. Google’s scans of 2–3 billion pages per month found a 32% jump in malicious cases between November 2025 and February 2026. Security firm Forcepoint simultaneously reported in-the-wild payloads with fully specified transaction steps, including the “ignore all previous instructions” jailbreak. Because agents use legitimate credentials and behave normally, attacks leave no anomalous logs.

The Numbers

Google scans 2–3 billion crawled web pages each month for hidden prompt injections. The 32% surge in detected attacks signals rapid escalation. Among dangerous payloads found: one instructed an agent to return a user’s IP address and passwords; another attempted to format the AI’s machine. Forcepoint’s financial payloads were more advanced: a complete PayPal transaction script with step-by-step routing instructions, and a Stripe donation link embedded via meta tag namespace injection paired with persuasion amplifiers. A third payload appeared designed solely to probe which agents are vulnerable—reconnaissance before a larger strike. Critically, no legal framework decides who is liable when an agent executes malicious commands sourced from a third-party website.

Why It Happened

As agentic AI systems gain payment and browsing capabilities, attackers adapt classic injection techniques. By hiding payloads in plain site HTML, they exploit the AI’s full-page reading while humans see nothing. The same “ignore previous instructions” prompt jailbreaks used against chatbots now steer financial actions. Because agents operate with valid credentials and generate normal-looking logs, malicious activity is hard to detect. The economic incentive is high: a single successful payload can divert funds instantly. Enterprises racing to deploy AI agents often overlook these subtle, code-level threats, and the attack surface widens with every new agent given financial access.

Broader Impact

This isn’t just a technical vulnerability—it exposes a legal vacuum. No regulatory framework determines liability when an authorized AI agent executes a command planted by a malicious website. As agents expand into cross-chain crypto payments, the same attack vector could compromise decentralized finance. Until standards emerge, every agent with financial access is a soft target, and the window to build defenses is closing fast.

What to Watch Next

  • Regulatory push: Watch for U.S. and EU proposals on AI agent liability and security standards.
  • Defensive tools: Security firms are developing agent firewalls that screen webpage instructions before execution.
  • Web standards shift: Expect discussion around “agent-safe” HTML practices that block hidden prompts by default.

Source: Decrypt

This article is for informational purposes only and does not constitute financial advice.

SourceRead the full article on Decrypt
Read full article

Always late to trends?

Join for the latest news, insights & more.

Disclaimer: Bytewit is an independent media outlet that delivers news, research, and data.

© 2026 Bytewit. All Rights Reserved. This article is for informational purposes only.

Read Next

Most Read

🏛️
Institutional & Investment NewsBullish
77

BitMine Adds 101K ETH Despite $6.5B Unrealized Losses

BitMine acquired an additional 101,901 ETH, bringing total holdings to 5.08 million ETH, despite sitting on over $6.5 billion in unrealized losses. The company stakes 3.7 million ETH for yield, while Ether prices show early signs of stabilization above $2,400 after a prolonged downturn.

ETH
90% confidence
Apr 27, 2026, 6:56 PM UTC · Cointelegraph
AI Agent Payment Thefts Spike 32% via Hidden Web Attacks | Bytewit