Social media crashed a bank in a day, agents will do it before we wake up

A smartphone showing a stock market crash with a red downward chart on a dark background

In 2023, the Silicon Valley Bank (SVB) had a historic bank run, with investors withdrawing $42 billion in a single day. The Federal Reserve’s postmortem report essentially blamed two modern technologies — social media and online banking for the incident.

the combination of social media, a highly networked and concentrated depositor base, and technology may have fundamentally changed the speed of bank runs.

It was considered instantaneous in nature, well, compared to how such large-scale withdrawals happened in the past. And the correlation between social media and the bank run was so strong that an academic paper confirmed it with data:

During the SVB run period, banks with high pre-existing exposure to Twitter lost 4.3 percentage points more stock market value. ref: Social Media as a Bank Run Catalyst

But with the pervasive access to agentic AI, especially in the financial sector, which has been innovating with automated trading for years, people are rightly pointing out that the next financial disaster could happen in a fraction of the time it took for SVB to collapse.

Now imagine agents doing this

Here is an excerpt from the Washington Post’s AI & Tech brief today that captures this concern:

Agentic AI in finance “means you’re going to have bank runs in the middle of the night based on rumors on Reddit or Moltbook,” said Foster, referring to the social network built for AI agents.

For trading, now even the friction of humans socializing online and reacting to rumors (credibly or not) is removed.

Even AI agents can socialize now, which when I first heard about it, I thought was someone’s silly idea. But I kind of see how that can be useful somewhat, for personalized agentic workflows reacting to external signals. Who would not want to automatically buy tickets for a concert the moment the rumors of it being available start surfacing online?

But there can be serious consequences of such a system in the financial markets, especially when the agents are programmed by less experienced end users. Actually, it is less to do with the experience of the end users, but rather the scale such access opens up. Even if a small percentage of users have their agents set up to react to market signals, the sheer number of users and agents could lead to a cascade effect.

We recently had meme stocks like AMC and GameStop get hammered by users on online forums. I can’t imagine what would happen to companies if agents get into that game. I can imagine the authorities banning automated trading, but agentic AI can so very easily assume the identity of their users, launching regular browsers, navigating like humans, making it really difficult to police.

Crypto has no closing bell

And for assets that are traded 24/7, like cryptocurrencies, I wonder if there is even a way to put a circuit breaker in place. I mean crypto already has crazy volatility. With mass agentic AI trading, I can imagine it being a disaster.

We have already seen some of this happening recently.

On 11th October 2025, when a US tariff hike on China was announced, $19 billion worth of crypto was liquidated.

In the stock market, this would have triggered at least two trading halts and a restart auction. In the crypto world, it simply triggered panic among investors, who knew there was no “limit down” option to save them. They could only watch helplessly as crypto’s worst liquidation event unfolded while the code continued to perform its programmed task. ref: The Price of Freedom: Does Crypto Need Its Own Circuit Breakers?

And in February 2026, a vulnerability in AI trading agents on Solana DeFi led to a $45 million breach, with no opening bell to pause the damage. ref: AI Trading Agent $45M Crypto Breach

And they’re easy to fool

What makes it even worse is that these AI agents can very easily be fooled by misinformation inserted by prompt injections. A recent study from Google DeepMind identified six categories of “traps” that can exploit AI agents, and found that content injection attacks trapped AI agents 86% of the time in tested scenarios.

A fake financial report released at the right time could trigger synchronized sell orders among thousands of AI trading agents.


Cover photo by Jamie Street on Unsplash

ai
TIL Hugo can figure out image dimensions at build time How Grafana's Yesterday Preset Works