Suppose — just for the length of this essay — that the cost of a useful thought continues to fall at roughly the rate it has been falling. Suppose the price of an expert-quality paragraph, today around a tenth of a cent, reaches a tenth of a tenth of a cent by the time I finish my PhD, and a tenth of that by the time whatever I write next is read. That is not a prediction. It is a thought experiment. But it is close enough to a prediction that I find it worth taking seriously.
In that world, intelligence is abundant. What does that do to the people whose job used to be scarce intelligence?
What actually gets cheap
It is easy to conflate two things. One is the cost of generating a paragraph, a diagnosis, a circuit sketch. The other is the cost of being confident that the paragraph is correct. These fall at very different rates. Generation is nearly free already; confidence is expensive and stays expensive, because confidence is the integral over verification, and verification is bounded by the physics of the world the claim lives in.
So in abundance, the scarce good is not thought. It is trustworthy thought. The professions that do best are the ones that were secretly verification work pretending to be generation work — medicine, law, research — and the professions that do worst are the ones that were secretly generation work pretending to be verification work. I will let you decide which is which. My own guesses are spicier than I want to put in print.
Abundance doesn't eliminate expertise. It relocates it, from the act of producing to the act of deciding what to believe.
Three shapes a post-scarcity career could take
The craftsperson. Some domains will re-artisanalize. When anyone can generate a passable X, the market for hand-made X, signed and attributable, grows. This is what happened to bread, furniture, and typography. It can happen to papers, lectures, and code.
The orchestrator. Other domains will reward the people who run large fleets of cheap thinkers well. The work is less "have the idea" and more "design the tournament of ideas, prune it, assemble the winners." This is an editorial skill, not a generative one, and it is undervalued precisely because the people who are good at it look, from outside, like they aren't doing anything.
The witness. And a surprising number of roles will be about being present in a way a model cannot be — at a bedside, in a courtroom, across a table during a deal. "Witness" is not quite the right word, but the right word hasn't been coined yet. These are jobs where the point is not the output; the point is that a human was the one who did it. I think this is a bigger category than people expect.
The risk I actually lose sleep over
It isn't unemployment. Unemployment is a real risk, but it is also a tractable one — we've navigated technological displacement before, imperfectly, at scale, and we know roughly which levers matter. What I lose sleep over is dependency drift.
When a capability becomes externally abundant, the internal version quietly atrophies. I was better at mental arithmetic in 2004 than I am now. I was better at reading maps in 2010. These are small losses and I am not mourning them. But the logic generalizes, and the thing that generalizes least comfortably is judgment. If the model drafts the email, the letter, the analysis, the differential, the brief — at what point does the human who signs it no longer have the substrate to catch the model being wrong?
This is the same concern I raised in the systemic-risk paper, and the reason I raised it there: it shows up everywhere, not just in markets.
What I want the next five years to look like
A short list, stated plainly, so future me can hold past me to it:
- We build evaluations that measure verification quality, not generation quality. "Is this claim supported?" is a harder benchmark than "is this fluent?" and it is the benchmark that will matter.
- We treat dependency drift as a first-class design constraint in any tool deployed to professionals. A tool that makes you worse over time is a defect, not a feature.
- We stop describing AI deployment in aggregate. "What share of tasks will AI do?" is almost always a less useful question than "how will AI change the shape of the profession that used to do those tasks?" The shape question is what people actually live inside.
- We preserve weirdness. A homogeneous research community is a fragile one, and the cheapest intelligence is also the most correlated intelligence. The counterweight is the oldest one: people who think in public, strangely, about things they care about. This website is, in its small way, an attempt at that.
I don't know how any of this resolves. I do know that the essays I find most useful, looking back from ten years into any technology, are the ones that were specific, honest about their uncertainty, and willing to be wrong on the record. So here this one is. I will be surprised by most of what happens. I will try to be surprised in public.