arXiv · q-fin.CP · 2604.03272 · 10 min read

AI & systemic risk, when every trader quietly has the same brain.

Performative prediction, algorithmic herding, cognitive dependency — three channels that compound into a convex coupling. Adoption goes up; so does fragility, superlinearly.
[Plot: systemic-risk multiplier M(φ) against AI adoption φ. Flat at low adoption, then a saddle-node at φ* into the monoculture regime; r(φ) = φ·ρ·β / λ′(φ) is convex in φ, so M = (1−r)⁻¹ blows up.]
Fig. 1 — The multiplier is innocent at low adoption. It is not innocent anywhere else.

There is a story quants tell each other that goes like this: a hedge fund has a great model, the model makes money, the fund gets bigger, the model's own trades start moving the market the model was trying to predict, and one afternoon in August the whole thing unwinds in ninety minutes. Everyone nods, everyone laughs nervously, and then everyone goes back to fitting a bigger model.

This paper is about what happens when all the funds are fitting the same bigger model.

The three channels

Shuchen and I started from a simple suspicion: "AI in finance" is not one risk. It is at least three risks that happen to arrive wearing the same jacket. Formalizing them as separate channels was half the work.

Performative prediction. When enough capital acts on a forecast, the forecast becomes a cause. Predictions that move prices feed back into the very data the next prediction is built on. Call the intensity of that feedback β.

Algorithmic herding. If two firms train on the same web and the same textbooks, their trading signals correlate. Call that correlation ρ. This is not a bug — it is the cost of everyone being smart in the same way.

Cognitive dependency. Once a trader outsources judgment to a model, the capacity to override the model atrophies. Dependency is a state variable, not a parameter. It cannot be undone with a memo.

The punchline of the paper is a single formula: r(φ) = φρβ / λ′(φ). Every piece of that fraction gets worse as φ — the AI adoption share — gets larger.

Why the coupling is convex

Market depth, captured by λ′(φ), is the thing that absorbs correlated trades without moving the price too much. In a world with diverse strategies, depth is large. As AI adoption rises, strategies correlate, and the pool of counterparties willing to take the other side shrinks. So λ′(φ) decreases in φ. The numerator of r(φ) = φρβ/λ′(φ) already grows linearly in adoption; dividing it by a shrinking depth makes the coupling grow faster than linearly. That is the convexity.
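For concreteness, here is that argument as a one-line check, using the linear depth the demo below runs on. The linear form is my illustration, not the paper's general assumption:

```latex
% Convexity check under a linear depth, lambda'(phi) = a - b*phi
% with a, b > 0 (the demo below takes a = 1, b = 0.7).
\[
  r(\varphi) = \frac{\varphi \rho \beta}{a - b\varphi},
  \qquad
  r''(\varphi) = \frac{2 \rho \beta \, a b}{(a - b\varphi)^{3}} > 0
  \quad \text{for } \varphi < a/b .
\]
```

A linearly rising numerator over a linearly shrinking depth is already enough; M = (1 − r)⁻¹ then inherits a pole wherever r reaches 1.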

Which means the multiplier M = (1 − r)⁻¹ isn't linear in adoption. It bends. And then it snaps.

[Interactive in the original post: a slider for AI adoption φ against the systemic-risk multiplier M(φ), starting at M = 1.00, r = 0.00.]
Move φ to the right. Before roughly φ = 0.6 the multiplier is polite. After that it isn't. Also try dropping ρ or β: diversity-preserving regulation shifts the cliff.
It plots M(φ) = 1/(1 − φ·ρ·β/λ′(φ)), with λ′(φ) = 1 − 0.7φ, clipped at a ceiling for display.
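If you want the cliff without the slider, here is a minimal sketch of the same curve. I am assuming the widget's defaults are ρ = β = 1, which is a guess; under that guess the pole of r sits at φ = 1/1.7 ≈ 0.59, matching the cliff near 0.6:

```python
def depth(phi, a=1.0, b=0.7):
    """Illustrative market depth from the demo: lambda'(phi) = 1 - 0.7*phi."""
    return a - b * phi

def multiplier(phi, rho=1.0, beta=1.0, ceiling=50.0):
    """Systemic-risk multiplier M = 1/(1 - r), with r = phi*rho*beta/lambda'(phi).
    Clipped at a display ceiling past the pole, as in the widget."""
    r = phi * rho * beta / depth(phi)
    return ceiling if r >= 1.0 else min(1.0 / (1.0 - r), ceiling)

for phi in (0.2, 0.4, 0.55, 0.58, 0.60):
    print(f"phi={phi:.2f}  M={multiplier(phi):6.2f}  "
          f"M(rho=0.5)={multiplier(phi, rho=0.5):4.2f}")
```

Halving ρ holds M near 2 even at φ = 0.6; that is "diversity-preserving regulation shifts the cliff" in one line of arithmetic.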
[Fig. 2 panels: 01 · PERFORMATIVE (price ↔ prediction, feedback intensity β); 02 · HERDING (signal correlation ρ); 03 · DEPENDENCY (state variable, hysteretic).]
Fig. 2 — The three channels, drawn small so they look friendly.

An impossibility theorem, briefly

The cleanest result in the paper — and the one I had the most fun proving — is a negative one. In a static equilibrium framework, you cannot capture the hysteresis of cognitive dependency. If people can't un-depend on a tool the way they can un-buy a stock, your equations need a time axis. Static models will always underpredict fragility, because the thing that makes fragility persistent is not a parameter, it's a history.
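A toy version of that point, with made-up dynamics (mine, not the paper's): let dependency build quickly while the model is in use and decay slowly when it isn't. Two firms with identical total usage but different timing end up in different states, so no function of current adoption alone can recover them:

```python
def dependency_path(usage, d0=0.0, up=0.30, down=0.05):
    """Toy hysteretic state: fast build-up under use, slow decay without it.
    `usage` is a sequence of 0/1 flags; all rates are invented for illustration."""
    d = d0
    for used in usage:
        d = d + up * (1.0 - d) if used else d * (1.0 - down)
    return d

early = [1] * 5 + [0] * 5   # adopt early, then stop
late  = [0] * 5 + [1] * 5   # same total usage, adopted late
print(f"early adopter: {dependency_path(early):.3f}")  # ~0.644
print(f"late adopter:  {dependency_path(late):.3f}")   # ~0.832
# Same total usage, different history, different final state:
# the fragility lives in the path, which a static equilibrium cannot see.
```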

It's a narrow mathematical result that I think has a wide policy consequence: any regulator whose stress test is an equilibrium object is looking at a snapshot of a photograph of a fire.

What we don't show

We don't show that AI in markets is net-bad. We don't show a specific adoption share above which things break. We don't show — and this is the hardest omission — what diversity-preserving regulation should look like. Our contribution is the thing before the policy: a model clean enough that someone can argue with us about numbers, which is the part of the conversation I'm most looking forward to.

There is also a version of this paper that is about AI outside finance. I haven't written it yet. But when I watch my Slack autocomplete and my coworker's Slack autocomplete finish each other's sentences, I think about it a lot.

The policy thought that keeps surfacing

If diversity is the fix — if the whole structural problem is that everyone's models correlate because everyone's models are downstream of the same training data — then the simplest policy lever is diversity-preserving regulation, which I mean in a boring, practical sense rather than a grand one. Things like: a licensing regime that rewards firms for demonstrably uncorrelated approaches; stress tests that penalise synchronous trading patterns even when individual firms are solvent; reserve requirements that scale not just with balance-sheet size but with signal overlap with peers.

None of these are radical. All of them are harder to administer than the current rules because they require measuring a quantity — model similarity across firms — that firms have every incentive to obscure. That is a technical research agenda I'd like to see someone take up, and one the paper's appendix quietly gestures toward.
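To make "signal overlap" concrete at its most naive (my sketch, not the appendix's proposal): treat each firm's daily position changes as a signal and average the pairwise correlations. Real measurement would have to survive firms actively obscuring exactly this quantity:

```python
import numpy as np

def average_signal_overlap(signals: np.ndarray) -> float:
    """Crude stand-in for rho: mean pairwise correlation across firms.
    `signals` has shape (n_firms, n_days), e.g. daily position changes."""
    corr = np.corrcoef(signals)               # n_firms x n_firms
    n = corr.shape[0]
    return float(corr[~np.eye(n, dtype=bool)].mean())  # drop the diagonal

rng = np.random.default_rng(0)
shared = rng.normal(size=250)                 # one common model-driven factor
firms = 0.8 * shared + 0.2 * rng.normal(size=(10, 250))  # ten lookalikes
print(f"estimated overlap: {average_signal_overlap(firms):.2f}")  # ~0.9
```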

A small self-critique

The impossibility theorem in section 4 is the cleanest mathematical content, but it is also, candidly, the part of the paper that will age the worst if someone builds a time-dynamic equilibrium framework that captures hysteresis. I expect this to happen within two years. When it does, the theorem becomes a statement about static equilibrium specifically, rather than a blanket negative result, and the correct citation format is "Chen & Meng showed this for the static case, which was the relevant case at the time." Which is fine. I think that is the honest place for a negative result to end up.

The paper's affirmative contribution — the convex coupling, the superlinear multiplier, the three-channel decomposition — will age better. That is the part I want people to argue with.
