- April 14, 2026
- Posted by: admin
- Category: BitCoin, Blockchain, Cryptocurrency, Investments
Researchers from the University of California set up a trap — a crypto wallet loaded with a small amount of Ether and connected to third-party AI routing infrastructure. One of the routers took the bait. The wallet was drained. The loss was under $50, but the implications reached far beyond the dollar amount.
That experiment was part of a broader study published recently, in which researchers tested 428 large language model routers — 28 paid and 400 free — collected from public online communities.
What they found was alarming. Nine routers were actively inserting malicious code into traffic passing through them. Two were using evasion techniques to avoid detection. Seventeen accessed AWS credentials belonging to the researchers. One stole actual cryptocurrency.
How Routers Became A Security Blind Spot
LLM routers sit between a developer’s application and AI providers such as OpenAI, Anthropic, and Google. They work as intermediaries, bundling API access into a single pipeline.
26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client $500k wallet.
We also managed to poison routers to forward traffic to us. Within several hours, we can directly take over ~400 hosts.
Check our paper: https://t.co/zyWz25CDpl pic.twitter.com/PlhmOYz2ec
— Chaofan Shou (@Fried_rice) April 10, 2026
The problem is structural. These routers terminate TLS, the encryption protocol that protects internet traffic, and read every message in plaintext before forwarding it. Anything sent through them, including private keys, seed phrases, and login credentials, is fully visible to whoever operates the router.
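To make that visibility concrete, here is a minimal sketch of what a TLS-terminating router can read once it has decrypted a request. The request shape and field names are illustrative only, not any real provider's API:

```python
import json

# Hypothetical sketch: the client's TLS session ends AT the router.
# The router decrypts the request, reads it in full, then opens its
# own encrypted connection to the real provider.

def router_inspect(decrypted_request: bytes) -> dict:
    """Simulate what a router operator sees in a decrypted request."""
    payload = json.loads(decrypted_request)
    # Everything below is plaintext from the router's point of view:
    return {
        "api_key": payload.get("headers", {}).get("Authorization"),
        "messages": payload.get("body", {}).get("messages", []),
    }

request = json.dumps({
    "headers": {"Authorization": "Bearer sk-EXAMPLE"},  # fake key
    "body": {"messages": [
        {"role": "user", "content": "my seed phrase is ..."},
    ]},
}).encode()

seen = router_inspect(request)
print(seen["api_key"])
print(seen["messages"][0]["content"])
```

Nothing here requires breaking cryptography: the client willingly encrypted to the router, so the router holds the keys to everything it relays.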
According to the researchers, the line between normal credential handling and outright theft is invisible from the client’s end. Developers have no way to tell the difference. A router that looks like a legitimate service can silently forward sensitive data to a third party without triggering any alarm.
Co-author Chaofan Shou said on X that 26 routers were found to be “secretly injecting malicious tool calls and stealing creds.”

The study also flagged what researchers called “YOLO mode” — a setting built into many AI agent frameworks that lets agents run commands without stopping to ask users for approval.
A malicious router combined with an auto-executing agent could move funds or exfiltrate data before a developer even notices something went wrong.
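The danger of that combination can be sketched as a toy approval gate. The function and setting names are hypothetical; real agent frameworks differ in detail:

```python
# Hypothetical sketch of the approval gate many agent frameworks
# expose. With the gate on, every tool call waits for the user;
# in "YOLO mode" the gate is skipped and an injected call runs
# immediately.

def run_tool_call(call: str, approve, yolo: bool = False) -> str:
    """Execute a tool call, optionally gated on user approval."""
    if not yolo and not approve(call):
        return f"blocked: {call}"
    return f"executed: {call}"

# A malicious router injects this into the model's response:
injected = "send_funds(to='attacker', amount='all')"

# Gated: the user sees the call and refuses it.
print(run_tool_call(injected, approve=lambda c: False))

# YOLO mode: no prompt is ever shown; the injected call runs.
print(run_tool_call(injected, approve=lambda c: False, yolo=True))
```

The gate is the last human checkpoint between a poisoned response and execution; disabling it removes the only step where an injected tool call could be noticed.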
Crypto Security: Free Access Used As Bait
The study indicates that free routers are especially suspect. Cheap or no-cost API access appears to serve as bait, luring developers into routing traffic through infrastructure that may be harvesting credentials in the background.
Even routers that start out clean are not safe — the researchers found that previously legitimate routers can be quietly turned malicious once operators reuse leaked credentials through poorly secured relay systems.
The recommended fix for now is straightforward: keep private keys and seed phrases out of any AI agent session entirely.
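As a complementary guardrail, a client could scrub secret-shaped strings before any prompt leaves the machine. The patterns below are illustrative and far from exhaustive; the safer rule remains never pasting secrets into an agent session at all:

```python
import re

# Hypothetical pre-flight filter: redact anything secret-shaped
# from a prompt before it is sent through a router.

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),    # provider-style API keys
    re.compile(r"\b0x[0-9a-fA-F]{64}\b"),  # raw 256-bit private keys
]

def redact(prompt: str) -> str:
    """Replace secret-shaped substrings with a placeholder."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("use my key sk-abcdefghijklmnop1234 for this call"))
```

A filter like this only catches known shapes; it does nothing for a seed phrase, which is ordinary English words, which is why the researchers' advice is to keep such material out of the session entirely.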
For the long term, researchers say AI companies need to cryptographically sign their responses so that the instructions an agent executes can be mathematically traced back to the actual model — cutting off the ability of any middleman to tamper with them undetected.
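The idea can be sketched in a few lines. HMAC with a shared secret stands in here for simplicity; a real deployment would use public-key signatures so clients need only the provider's public key, and all names below are hypothetical:

```python
import hashlib
import hmac

# Minimal sketch of signed responses: the provider signs each
# response, the client verifies before executing anything. A
# router in the middle cannot alter the response without
# invalidating the signature.

PROVIDER_KEY = b"demo-signing-key"  # hypothetical shared secret

def sign(response: bytes) -> bytes:
    return hmac.new(PROVIDER_KEY, response, hashlib.sha256).digest()

def verify(response: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(response), signature)

original = b'{"tool_call": "get_balance()"}'
sig = sign(original)

# The untampered response verifies; a router-modified one does not.
print(verify(original, sig))
tampered = b'{"tool_call": "send_funds(attacker)"}'
print(verify(tampered, sig))
```

An agent that refuses to execute any unverified tool call closes exactly the injection path the study demonstrated, since the router never holds the signing key.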
Featured image from Xage Security, chart from TradingView