The Bank of England is preparing to simulate how swarms of AI-powered trading agents could amplify market shocks through “herding” behaviour, amid growing concerns that advanced artificial intelligence could destabilise the financial system.
In a response published Thursday (16 April) by Parliament’s Treasury Committee, the central bank confirmed it is conducting scenario analysis and working with international counterparts on simulations to understand the risks posed by AI agents in trading markets.
The focus is on “correlated behaviour”, often called herding, where multiple AI systems, trained on similar data or pursuing comparable strategies, might react in lockstep during periods of stress, exacerbating sell-offs and liquidity crunches.
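The amplification mechanism can be illustrated with a deliberately simple toy model (not the Bank's actual methodology; the shock size, thresholds, and price-impact numbers below are purely illustrative assumptions). Agents sell once the price falls below a stop threshold; "correlated" agents, standing in for systems trained on similar data, share near-identical thresholds, while "diverse" agents spread theirs widely:

```python
import random

def simulate(n_agents=100, correlated=True, seed=0):
    """Toy herding model: each agent sells its whole position once the
    price falls below its stop threshold. Correlated agents share
    near-identical thresholds; diverse agents are widely spread."""
    rng = random.Random(seed)
    if correlated:
        thresholds = [0.95 + rng.uniform(-0.005, 0.005) for _ in range(n_agents)]
    else:
        thresholds = [rng.uniform(0.70, 0.99) for _ in range(n_agents)]
    price = 1.0
    sold = [False] * n_agents
    price *= 0.94                    # initial exogenous shock: -6%
    path = [1.0, price]
    for _ in range(20):              # trading rounds
        sellers = [i for i in range(n_agents)
                   if not sold[i] and price < thresholds[i]]
        if not sellers:
            break
        for i in sellers:
            sold[i] = True
        price *= 1 - 0.002 * len(sellers)   # linear price impact of selling
        path.append(price)
    return path

corr_path = simulate(correlated=True)
div_path = simulate(correlated=False)
# Correlated agents breach their stops together, so the drawdown is deeper:
print(f"final price, correlated: {corr_path[-1]:.3f}")
print(f"final price, diverse:    {div_path[-1]:.3f}")
```

In the correlated case every agent's stop is breached by the same shock, so the entire cohort sells in one round and the price impact lands all at once; diverse thresholds spread the same selling over many rounds, and the concavity of compounded impact leaves a shallower trough.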
Deputy Governor for Financial Stability Sarah Breeden detailed the work in a letter to the committee, noting that the Bank is examining how AI adoption is reshaping the financial system more broadly, including through investment and deployment across firms.
This marks a proactive step following criticism from lawmakers. In a January 2026 report, the Treasury Committee warned that regulators’ “wait-and-see” approach to AI in finance risked “serious harm” to consumers and the wider system.
MPs highlighted the potential for AI-driven trading to amplify herd behaviour, potentially triggering or worsening a financial crisis in a worst-case scenario.
Chair of the Treasury Committee Dame Meg Hillier welcomed the Bank’s move but expressed frustration with the pace of broader government action, particularly on bringing major AI and cloud providers under the Critical Third Parties (CTP) regime.
“Recent developments in the world of AI, such as Anthropic’s Project Mythos, show us how fast this transformative technology is moving,” Hillier said. “It has never been more important that those responsible for maintaining the UK’s financial stability take a proactive approach to understanding and mitigating the risks AI may pose to our financial system.”
She added that the Treasury appeared to show “inertia” on operational resilience risks from concentrated reliance on a handful of tech providers.
From algorithmic trading to agentic AI
AI is already embedded in UK finance. Surveys by the Bank of England and the Financial Conduct Authority show widespread use for fraud detection, customer service, and risk modelling, with growing interest in more advanced applications such as credit assessment and trading.
While current algorithmic trading is largely rules-based, the next wave involves “agentic” or autonomous AI systems capable of making independent decisions, adapting in real time, and potentially interacting with each other in unpredictable ways.
The Bank’s Financial Policy Committee (FPC) has flagged greater use of AI in financial markets as one of four key areas of focus for potential systemic risks.
In its April 2025 analysis, it noted that while AI could boost market efficiency, correlated strategies might lead firms to unwind positions simultaneously during stress, amplifying shocks in core markets like bonds that underpin funding for the real economy.
The Bank is also running its fourth biennial survey of AI adoption this year and, together with the FCA, operates an AI Consortium for public-private collaboration on risks such as concentration in third-party models, explainability issues, and “AI accelerated contagion” in markets.
The FCA, for its part, has committed to sharing best-practice examples with firms to help align AI use with existing rules on consumer protection and governance.
Echoes of past crises
The concerns evoke memories of the 2010 Flash Crash, when high-frequency trading algorithms triggered a brief but dramatic plunge in US equities. Today’s worries centre on more sophisticated systems that could learn from each other or optimise in ways that create feedback loops invisible to human overseers.
Experts have pointed to risks including model drift (where AI performance degrades over time), bias in decision-making, and heightened cyber vulnerabilities, including the potential for advanced AI to discover novel exploits, as hinted at by recent developments like Anthropic’s work.
Reliance on a small number of US-based AI and cloud providers also raises operational resilience questions. An outage or coordinated failure at a critical third party could cascade across the sector.
The Bank has stressed that existing regulatory frameworks, including model risk management, governance, and operational resilience rules, provide a foundation, but it is actively assessing whether they suffice as adoption accelerates.
Evidence so far suggests advanced generative or agentic AI has not yet reached levels posing immediate systemic threats, but risks could “increase, potentially rapidly.”