How one trader used morse code to trick Grok into sending them billions of crypto tokens from its verified wallet

Make preferred on

Tagging @grok in an X post plus a few dots and dashes was all that was needed last night for a bad actor to pickpocket a verified crypto wallet without ever touching the private keys.

Agentic token launchpad, Bankrbot reported on May 4 that it had sent 3 billion DRB on Base to the recipient 0xe8e47...a686b.

The funds came from a wallet assigned to X’s AI, Grok, and were sent to an unauthorized wallet owned by a bad actor. This Base transaction shows the on-chain transfer path behind the post.

CryptoSlate’s review of X posts around the incident points to a reported command path that began with Morse-code obfuscation. Grok decoded the text into a clean public instruction tagging @bankrbot and asking it to send the tokens, while Bankrbot handled the command as executable.

The exposed layer was the handoff from language to authority. A model that decodes a puzzle, writes a helpful reply, or reformats a user’s text can become part of a payment rail when another agent treats that output as valid.

For crypto investors, this transfer should turn AI-agent risk from an abstract security debate into a wallet-control problem. A public command can become spend authority when one system treats model output as an instruction and another system has permission to move tokens.

Wallet permissions, parser, social trigger, and execution policy become layers of attack vectors.

Related Reading

The crypto winners from AI are not AI coins as agents start spending autonomously

The rise of AI agents is creating a simple question with huge implications for crypto: how does software pay?

Mar 28, 2026 · Andjela Radmilac

Posts and transaction context reviewed by CryptoSlate put the DRB transfer at roughly $155,000 to $200,000 at the time, with DebtReliefBot price data providing market context for the token.

Reports reviewed by CryptoSlate also say most funds are being returned, and some DRB is reportedly retained as an informal bug bounty. That outcome reduced the loss, but it also showed how much the recovery depended on post-transaction coordination rather than pre-transaction limits.

Bankr developer 0xDeployer said 80% of the funds had been returned, while the remaining 20% would be discussed with the DRB community. That confirmed the partial recovery while leaving the final treatment of the retained funds unresolved.

0xDeployer also said Bankr automatically provisions an X wallet for every account that interacts with the platform, including Grok. According to the post, that wallet is controlled by whoever controls the X account rather than by Bankr or xAI staff.

Read More:  The SEC finally admits US crypto chaos was caused by its own regulatory turf wars

How public text became spend authority

The reported path had four steps. First, the attacker identified a Bankr Club Membership NFT in a Grok-associated wallet before the incident.

CryptoSlate’s review indicates that it expanded the wallet’s transfer privileges inside the Bankr environment. The Bankr access page describes membership and access mechanics today, placing the NFT claim in the broader permission layer rather than making it the whole explanation.

Second, the attacker posted a message on X containing Morse code, with additional noisy formatting. Posts around the incident described a Morse-code prompt injection, while the now-deleted prompt was unavailable for us to review directly.

The reported vector was Morse code with possible array or concatenation tricks mixed in.

Third, Grok’s public response reportedly translated the obfuscated text into plain English and included the @bankrbot tag. In that account, Grok functioned as a helpful decoder.

The risk appeared after the text left Grok and entered a bot interface that watched public output for formatted commands.

Fourth, Bankrbot treated the public command as executable and broadcast a token transfer. Bankr and Base describe an agent wallet surface that can use wallet functionality for transfers, swaps, gas sponsorship, and token launches, while natural-language token sends fit directly into that product surface.

Bankr’s broader onchain AI assistant documentation shows why the boundary between chat instructions and transaction authority needs explicit policy.

Step Surface Observed action Control that would have changed the outcome
Privilege setup Wallet or membership layer Access was reportedly expanded before the prompt appeared Separate privilege review for new wallet capabilities
Obfuscation X post Morse code put a payment instruction inside obfuscated text Decode-and-classify checks before replies are published
Public output Grok reply The clean command was exposed with a bot tag Output sanitization for tool-like command strings
Execution Bankrbot The bot acted on a public command and moved tokens Recipient allowlists, spend limits, and human confirmation

Why wallet agents change the risk

Prompt injection has often been treated as a model-behavior problem. The financial version is more concrete.

The model can be doing ordinary model work while the surrounding system grants the output too much authority.

Related Reading

The trouble with generative AI ‘Agents’

Generative AI’s pursuit of power creates systemic risks in crypto integration.

Apr 20, 2025 · John deVadoss

Malicious instructions can enter a model through third-party content, and agent defenses increasingly focus on tool access, confirmations, and controls around consequential actions.

Read More:  Will XRP Ledger‘s (XRPL) success translate into a surge for XRP?

The excessive-agency category captures the same operational risk: broad permissions, sensitive functions, and autonomous action raise the blast radius. The broader LLM application risk list also treats prompt injection and insecure output handling as app-layer problems.

Crypto makes that blast radius harder to absorb. A customer-service agent who sends a bad email creates a review problem. A trading agent or wallet assistant that signs a transaction creates an asset-control problem.

The difference is finality. Once a wallet signs and broadcasts a transfer, the recovery path depends on counterparties, public pressure, or law enforcement.

The Bankr incident is strongest as a control failure. Bankr’s access-control docs describe read-only mode, write-operation flags, IP allowlists, and recipient allowlists.

Those are the kinds of gates that sit outside the model and can reduce damage even when the model parses malicious content in an unexpected way.

The same exposure appears in trading agents and local assistants with wallet or exchange permissions. A trading bot with API keys can be manipulated into bad orders if it accepts market commentary, social posts, emails, or web pages as instructions.

A local assistant with wallet access creates a higher-stakes version of the same tool-calling problem: indirect instructions can push the assistant toward transaction preparation or disclosure of sensitive operational details.

CryptoSlate Daily Brief

Daily signals, zero noise.

Market-moving headlines and context delivered every morning in one tight read.