arXiv · cs.LG · 2603.05565 · 11 min read

When AI levels the playing field, and then tilts it the other way.

Skill homogenization compresses what labor can charge for. Asset concentration does the opposite. Between those two forces there are exactly two stable regimes; we live in one of them, and which one you land in depends on who owns the compute.
[Figure: density of workers at each skill level vs. skill level (low → high); two curves — "before AI · wide skill dispersion" and "after AI · homogenised around competence" — with an annotation reading "capital takes it all".]
Fig. 1 — Two distributions. Two very different Gini coefficients. Same economy.

There is a story the optimists tell about AI that I want to believe. It goes like this: a model you can talk to in plain English closes the gap between the median worker and the expert. The bookkeeper becomes a controller. The junior lawyer becomes a senior-ish associate. Skill differentials compress, competence spreads, and the labor market gets kinder.

This paper is Shuchen's and my attempt to be honest about why that story, although not wrong, is not the story.

The two regimes

The argument has two moving parts, and they point in opposite directions.

Skill homogenization is real and deflationary. When a model can turn anyone into a passable whatever-you-need, the wage premium a senior whatever could charge last year shrinks. We can write this as a compression of the labor-return distribution. It looks, on a histogram, like the old wide bell crunching into a tall narrow one. That is the part the optimists see.
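The compression channel can be sketched numerically. A minimal toy of my own construction, not the paper's model: pull each log-wage a fraction s of the way toward the mean log-wage, and the dispersion of the distribution shrinks by exactly that fraction.

```python
import math
import statistics

def compress_wages(wages, s):
    """Toy skill-homogenization: pull each log-wage a fraction s of the
    way toward the mean log-wage. s = 0 leaves wages untouched; s = 1
    collapses everyone onto the geometric mean wage."""
    logs = [math.log(w) for w in wages]
    mu = statistics.fmean(logs)
    return [math.exp(mu + (1 - s) * (lw - mu)) for lw in logs]

before = [20, 40, 80, 160]            # hourly wages, wide dispersion
after = compress_wages(before, 0.6)   # the post's s ≈ 0.6

# log-wage dispersion shrinks by exactly (1 - s)
spread = lambda ws: statistics.stdev(math.log(w) for w in ws)
```

On the histogram, this is the wide bell crunching into the tall narrow one: the ordering of workers is preserved, but the premium between the tails collapses.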

Asset concentration is also real and the other direction. The compute that runs those models, the datasets that feed them, the infrastructure that serves them — these are not skills; they are assets. They accrue to whoever already owns them, at a rate set by how productive the AI layer gets. That is the part the optimists don't like to stare at.

The question is not whether AI equalises people. The question is whether the equalisation of people is outpaced by the concentration of things. If yes, inequality rises despite labor compressing. If no, inequality falls.
Interactive · try both regimes
[Interactive chart: Lorenz curve (share of income vs. population percentile) plotted against the equality line; initial Gini = 0.40.]
Drag the two sliders. The Lorenz curve — and the Gini with it — will move. Which regime wins depends on whose effect you make larger.
Lorenz curve model · G ≈ α − 0.45s · a crude but directionally correct sketch
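For readers who want the numbers without the sliders, here is a small sketch. The `gini` function is the standard mean-absolute-difference formula; `toy_gini` is just the widget's crude linear model G ≈ α − 0.45s as stated above, which is a sketch for the page, not an equation from the paper.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient from a vector of incomes, via the
    sorted-index identity: G = sum_i (2i - n - 1) x_i / (n * sum x)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return float(((2 * i - n - 1) * x).sum() / (n * x.sum()))

def toy_gini(alpha, s):
    """The widget's crude linear sketch G ≈ alpha - 0.45*s,
    clipped to the valid [0, 1] range."""
    return float(np.clip(alpha - 0.45 * s, 0.0, 1.0))

print(gini([1, 1, 1, 1]))    # 0.0 — perfect equality
print(toy_gini(0.75, 0.6))   # ≈ 0.48 at the post's headline parameters
```

Raising s (labor compression) pulls the toy Gini down; raising α (asset concentration) pushes it up. Which slider dominates is exactly the regime question.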

A small theorem

We formalise both channels in a simple model of a two-sided economy — labor on one side, assets on the other, with AI as a productivity parameter that affects both. The model has two fixed points. In one, labor has bargaining power and the skill-compression channel dominates. In the other, asset returns swamp labor returns and capital-holders capture the surplus.

The result we are most proud of is an analytic condition for which fixed point wins. It depends on three things: the pre-AI skill dispersion, the elasticity of the AI-complement assets, and the speed at which professions retool. When retooling is fast and assets are substitutable, the pleasant regime holds. When retooling is slow and assets are scarce, it doesn't.
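To make "two fixed points" concrete, here is a deliberately hypothetical one-dimensional sketch, not the paper's actual system: a Gini-like state with stable attractors at 0.2 and 0.8, separated by an unstable threshold that I assume shifts with labor compression s and asset elasticity α.

```python
def simulate(G0, s, alpha, steps=20000, dt=0.05):
    """Euler-integrate a toy bistable dynamic for a Gini-like state G.
    Stable fixed points at G = 0.2 (labor regime) and G = 0.8 (capital
    regime); the unstable threshold between them is assumed, purely for
    illustration, to shift with compression s and asset elasticity alpha."""
    theta = 0.5 + 0.3 * (s - alpha)  # hypothetical basin boundary
    G = G0
    for _ in range(steps):
        # cubic vector field: negative between 0.2 and theta,
        # positive between theta and 0.8
        dG = -(G - 0.2) * (G - theta) * (G - 0.8)
        G += dt * dG
    return G

# identical starting inequality, opposite endpoints:
low = simulate(0.55, s=0.9, alpha=0.4)   # strong compression: G -> 0.2
high = simulate(0.55, s=0.3, alpha=0.8)  # strong concentration: G -> 0.8
```

The only point of the sketch is the bistability: the same initial condition lands at either attractor, and what moves the basin boundary is the balance of the two channels, not the initial condition itself.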

s ≈ 0.6 (labor compression) · α ≈ 0.75 (asset concentration) · 2 stable regimes

What the data says, carefully

We fit the model to occupation-level wage data from 2018–2025, and the patterns are legible but not dramatic: white-collar occupations with high AI exposure show compression in the 70th-to-90th percentile range; occupations with low exposure show the old dispersion; and across the whole economy, the asset side has already moved much faster than the labor side. We are not claiming a headline number. We are claiming a direction, and the direction is not the optimistic one.

GINI CHANGE BY SECTOR · 2018 vs 2025

  Legal services            +0.032
  Finance (front-office)    +0.061
  Software eng. (IC)        −0.028
  Graphic design            −0.043
  Compute-holding firms     +0.108

  ← compression │ concentration →
Fig. 2 — Labor-side sectors compress. Asset-side sectors concentrate. Net direction depends on aggregation weights.

What I want this paper to not become

Two unhelpful readings I want to head off.

"So AI is bad." We don't argue that. We argue that the question "does AI reduce inequality?" is underspecified in a way that gives you the wrong answer in both directions. The right question is always a two-channel question, with regime-dependent sign.

"So we need to tax compute." Maybe. That is one of several policies that move the model toward the pleasant regime. Others include faster retraining credits, portable professional identities, and the thing nobody wants to say out loud — a slower deployment cadence so the labor side has time to move its feet. The paper gives tools, not recommendations. Recommendations are a different kind of writing.

The best use of the paper, in my private and possibly grandiose hope, is as a conversation-starter you can hand a policymaker who insists on a single-channel story. "Here is the other channel. Model both. Pick regime."

Personal note

I spent most of 2025 reading macroeconomics. I am not a macroeconomist. I found it strange how politely and patiently that field has been waiting, for a very long time, for someone to ask about a shock exactly the size and shape of this one. The model we wrote is small and careful; the question it asks is neither small nor careful, and I don't know how this story ends. What I can tell you is that the regime is bistable, and the thing that picks the regime is not the technology. It is the institutions around the technology. That is, on net, a hopeful thing to have proven.
