There is a scene that plays out, with minor variations, in the offices of economics departments at every major research university in America. A job candidate sits across from an interviewer — someone senior, someone who came up through the profession when price theory was the shared language of the discipline. The interviewer poses a question. Not a hard question, really. Something you might find in an intermediate microeconomics textbook, dressed up slightly for the occasion. How would a minimum wage affect employment in a monopsony labor market? Walk me through the welfare effects of a tariff when domestic supply is perfectly elastic. What happens to the price of housing permits when you restrict their transferability?
The candidate stares. The candidate has published in the American Economic Review. The candidate has a job market paper running to ninety-three pages with six appendices and a twelve-page online supplement. The candidate can code in R, Python, and Julia, can run a regression discontinuity design in their sleep, and has applied machine learning methods to predict municipal bond defaults with startling accuracy. But the candidate cannot, in the space of a conversation, reason through a supply-and-demand problem using the basic tools of price theory.
Tyler Cowen says that when he interviews candidates from top-ten programs, "well over half" cannot do this. More than half. These are the best young economists in the world, the products of the most rigorous graduate training programs ever devised, and they cannot perform the kind of economic reasoning that Milton Friedman considered the baseline of professional competence. Something has happened to economics, and this chapter is about what that something is.
To understand the retreat, you first have to understand what is retreating. Price theory is not a body of doctrine so much as a habit of mind — a conviction that the basic concepts of microeconomics, the ones you learn in your second year of undergraduate study, are not merely pedagogical scaffolding to be discarded once you reach the heights of professional practice, but are themselves the most powerful tools in the economist's kit. Supply and demand. Marginal cost. Opportunity cost. Consumer and producer surplus. Elasticity. The equimarginal principle. In the price theory tradition, these concepts are not simplifications of some deeper, more mathematical truth. They are the truth, and the economist's job is to learn to apply them with ever greater subtlety and precision to an ever wider range of human problems.
The capital of this tradition was, for decades, the University of Chicago. The lineage runs from Frank Knight through Milton Friedman and George Stigler to Gary Becker and on to Kevin Murphy, who for years ran the legendary Price Theory Summer Camp — an intensive course that trained a generation of economists to think in this particular way. The camp was famous. It was the place where you learned not just to use models but to think like a Chicago economist, to approach any problem by asking: What are the constraints? What is being maximized? What is the relevant margin? And what happens when conditions change at that margin?
But here is the thing about rearguard actions. They are, by definition, fought during a retreat. The very existence of Kevin Murphy's summer camp — the fact that it needed to exist, that price theory had to be taught as a special intensive supplement rather than absorbed through the normal course of graduate training — was itself evidence that the tradition was in trouble. You do not need a summer camp to teach people something that the regular curriculum already provides.
Steve Levitt understood this. Levitt, who became the most famous economist in America through Freakonomics, who built his career applying clever identification strategies to quirky questions in the Chicago tradition, retired from academia at fifty-seven. His valedictory assessment was blunt: "I think in the marketplace for ideas, I gotta say that the Chicago price theory really has lost." This is not some outsider's critique. This is one of Chicago's own, a man who spent his entire career at the university, looking at the landscape and acknowledging what he saw. And Murphy himself stepped down from Chicago. The generals are leaving the field.
Friedman saw this coming. As early as the 1990s, he worried that economics was losing its center of gravity, that the profession was drifting away from the kind of intuitive economic reasoning that he considered its greatest strength. Friedman was not opposed to mathematics — he was a superb mathematician when he chose to be — but he believed that the purpose of mathematical formalism was to discipline intuition, not to replace it. A proof that you could not explain in plain economic language was, in Friedman's view, a proof you did not fully understand.
The Rise of Empiricism
How data and econometrics displaced pure economic theory.
What replaced price theory? The short answer is: data.
The transformation of economics from a primarily theoretical to a primarily empirical discipline is one of the most dramatic shifts in the history of any academic field. The numbers tell the story with the clarity that economists themselves would appreciate. The share of theoretical papers published in the American Economic Review, the Journal of Political Economy, and the Quarterly Journal of Economics — the three most prestigious journals in the profession — fell by 31.6 percentage points between 1963 and 2011. Thirty-one point six percentage points. In 1963, theory dominated these journals. By 2011, it was a minority practice, and the trend has only accelerated since.
The typical economics paper today looks nothing like the typical economics paper of 1975. It is vastly longer — ninety pages, sometimes more, with multiple appendices containing robustness checks, alternative specifications, placebo tests, and enough supplementary material to constitute a small monograph. The seventeen-to-twenty-five-page paper, the kind that Friedman or Stigler or Samuelson would have written — a crisp theoretical argument, a few illustrative examples, perhaps some back-of-the-envelope empirical work — is essentially dead. It would not survive peer review at a top journal. Referees would send it back with demands for more data, more identification, more robustness, more appendices.
And the range of topics has exploded. No one demands that economics papers restrict themselves to what previous generations would have recognized as "economics." Economists now publish on public health, education policy, gender gaps, racial disparities, church attendance, the psychology of decision-making, the effects of air pollution on test scores, the impact of lead exposure on crime rates. The National Institutes of Health has become a major funder of economics research — a sentence that would have been incomprehensible to an economist of the 1960s. The boundaries of the discipline have become so porous that the very question "What is economics?" has become difficult to answer.
Tyler's answer is characteristically direct: "What distinguishes economics as a field, right now, is a mix of higher standards, harder work, better math, and higher IQs." Not a subject matter. Not a theoretical framework. Not a set of shared assumptions about how the world works. Economics is distinguished by who its practitioners are and how hard they work, not by what they study or how they think about it. This is a remarkable claim. It amounts to saying that economics has become a method in search of a subject rather than a subject in search of a method — precisely the opposite of what the marginalists of the 1870s intended when they tried to establish economics as the science of value and exchange.
The Challenge to Intuition
Why formal proof is replacing economic intuition.
But the decline of price theory is not merely a sociological phenomenon — a matter of shifting fashions and departmental politics. There is an intellectual argument behind it, and it is a powerful one.
Shengwu Li, who teaches at Harvard, has offered perhaps the most cutting formulation. When an economist says, "Can you give me an economic intuition for that result?" what they typically mean, Li observes, is: "Can you explain that result using mathematics that was introduced to economics more than twenty years ago?" This is devastating. It reframes the entire appeal to "intuition" as a form of conservatism, a preference for the familiar dressed up as a preference for the fundamental. When you say you want an intuitive explanation, you are not asking for something deeper than the proof. You are asking for something older than the proof — something that maps onto mathematical tools you already understand. The freshman who finds calculus "unintuitive" is not identifying a flaw in calculus. She is identifying a gap in her own training.
Ali Shourideh goes further, channeling the great mechanism designer Leo Hurwicz: "Intuition, Shmintuition, look at the ***ing proof!!!" And Christopher Phelan delivers the logical coup de grâce: "If the intuition were correct, it would BE the proof." This is the argument in its purest form. If your informal, verbal, "intuitive" reasoning actually captured the truth of the matter, then it would be possible to formalize it into a rigorous proof. If it cannot be so formalized, then it is not genuine intuition — it is hand-waving that happens to feel persuasive. And the feeling of persuasiveness, as centuries of intellectual history have demonstrated, is one of the worst possible guides to truth.
Consider Topkis's theorem. Tyler admits, with characteristic honesty, that he did not even know what Topkis's theorem was — and he is Tyler Cowen. The theorem, published by Donald Topkis in 1978, concerns supermodular functions and lattice theory. It provides conditions under which the set of optimal solutions to an optimization problem is monotonically increasing in a parameter. In plain English: it tells you when "more of one thing" leads to "more of another thing" in an optimization context. This is, when you think about it, exactly the kind of question that price theory tries to answer through verbal reasoning about margins and incentives. But Topkis's theorem does it rigorously, generally, and in contexts where verbal intuition breaks down completely — where the functions are not differentiable, where the choice sets are not convex, where the standard machinery of Lagrangians and first-order conditions simply does not apply.
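The flavor of the result can be seen in a toy example of my own, not the theorem in its full generality: with a supermodular objective, the optimal choice moves up with the parameter even on a gappy, non-convex choice set where first-order conditions have nothing to say.

```python
# Toy illustration in the spirit of Topkis's theorem: for a supermodular
# objective f(x, t) (here the x*t cross term means raising t raises the
# marginal return to x), the maximizer of f(., t) is nondecreasing in t --
# even on a discrete, non-convex choice set where calculus does not apply.

def argmax_x(f, choices, t):
    """Return the largest maximizer of f(., t) over a finite choice set."""
    best = max(f(x, t) for x in choices)
    return max(x for x in choices if f(x, t) == best)

# Supermodular objective: the cross-partial in x and t is positive.
f = lambda x, t: x * t - 0.6 * x ** 2

choices = [0, 1, 2, 3, 5, 8]   # deliberately gappy: no convexity assumed
optima = [argmax_x(f, choices, t) for t in range(10)]

print(optima)                   # -> [0, 1, 2, 3, 3, 5, 5, 5, 8, 8]
# The optimal choice never decreases as the parameter t rises.
assert all(a <= b for a, b in zip(optima, optima[1:]))
```

The monotonicity holds here without differentiability or convexity, which is exactly the territory the verbal machinery of margins cannot reach.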
The new generation of economic theorists works with tools like these — tools drawn from topology, measure theory, functional analysis, and combinatorics that have no verbal equivalent in the price theory tradition. You cannot explain Topkis's theorem using supply-and-demand diagrams. You cannot reach it through the equimarginal principle. The mathematics is not a translation of an underlying economic intuition; the mathematics is the insight, and there is no other form in which the insight can be expressed. This is what Phelan means. If the intuition could substitute for the proof, we would not need the proof. We need the proof because the intuition is insufficient.
Long-Term Capital Management: Theory Goes Wrong
How elegant models fail when they meet an unpredictable world.
There is a parable that illuminates this tension, and it comes from finance — the most mathematized corner of economics, the place where the marriage of theory and practice has been most intimate and most consequential.
In 1973, Fischer Black and Myron Scholes published a paper that would reshape the financial world. Their options pricing formula — the Black-Scholes model — provided, for the first time, a theoretically rigorous method for determining the fair price of a financial option. The timing was exquisite: the Chicago Board Options Exchange opened that same year, creating an organized market for the very instruments that Black and Scholes had learned to price.
The formula was beautiful. It was, in the language we have been using, deeply intuitive — at least to those trained to read it. It expressed the price of an option as a function of just five variables: the current stock price, the strike price, the time to expiration, the risk-free interest rate, and the volatility of the underlying stock. No estimate of expected returns was needed. No forecast of future stock prices. The formula was derived from a single, elegant insight: that by continuously adjusting a portfolio of the stock and a risk-free bond, you could perfectly replicate the payoff of any option. Since the replicating portfolio and the option must, by the no-arbitrage principle, have the same price, the option's value was determined.
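The formula is compact enough to state in a few lines. What follows is a standard textbook implementation of the call price, with test values of my own choosing rather than anything from the historical record:

```python
# Black-Scholes price of a European call, written directly from the five
# inputs the chapter lists: spot price, strike, time to expiry, risk-free
# rate, and volatility. Standard library only; the normal CDF via math.erf.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call price. Note that the stock's expected return appears
    nowhere: the replication argument eliminates it."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def black_scholes_put(S, K, T, r, sigma):
    """European put via put-call parity: P = C - S + K * exp(-r*T)."""
    return black_scholes_call(S, K, T, r, sigma) - S + K * exp(-r * T)

# An at-the-money one-year call: S = K = 100, r = 5%, sigma = 20%.
price = black_scholes_call(100, 100, 1.0, 0.05, 0.20)
print(f"{price:.4f}")   # roughly 10.45
```

The absence of the expected return is the whole point: the price follows from no-arbitrage alone, which is why the formula felt like pure economic reasoning made operational.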
Traders carried printouts of the formula onto the trading floor. Texas Instruments sold calculators with the Black-Scholes formula pre-programmed. The model did not merely describe the options market — it created the market, providing the shared language and the shared valuation framework that made large-scale options trading possible. In the years after its publication, options markets exploded in size and sophistication. Scholes won the Nobel Prize in 1997 (Black had died in 1995 and was thus ineligible, though the committee took the unusual step of acknowledging his contribution).
Here was price theory's finest hour — or so it seemed. An elegant theoretical insight, grounded in basic economic reasoning about arbitrage and replication, had been translated into a practical tool of enormous power. Friedman himself might have approved: the theory was intuitive, it was useful, and it worked.
But then came Long-Term Capital Management.
LTCM was founded in 1994 by John Meriwether, a legendary bond trader from Salomon Brothers. Its board of directors included Myron Scholes himself and Robert Merton, who had independently derived a version of the Black-Scholes formula and shared the 1997 Nobel Prize. The firm's strategy was, in essence, to apply the insights of modern financial theory — the same insights that had produced the Black-Scholes formula — to identify and exploit small pricing discrepancies in global bond markets. The models were sophisticated. The people were brilliant. The returns, for a while, were spectacular.
In August 1998, Russia defaulted on its government debt. This was not, in itself, a catastrophic event for LTCM — the firm had relatively little direct exposure to Russian bonds. But the default triggered a global flight to quality, as investors around the world simultaneously rushed to sell risky assets and buy safe ones. Correlations that had been low in normal times — correlations that LTCM's models assumed would remain low — suddenly spiked toward one. Every position in the fund moved against them at once. The diversification that was supposed to protect the portfolio evaporated precisely when it was needed most.
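The mechanics of that evaporation can be seen in a standard one-factor approximation, with illustrative numbers of my own rather than anything from LTCM's actual book:

```python
# Why diversification evaporates when correlations spike: for n equally
# weighted positions, each with volatility sigma and a common pairwise
# correlation rho, portfolio volatility is
#     sigma_p = sigma * sqrt(1/n + (1 - 1/n) * rho)
# As rho approaches 1, sigma_p approaches sigma: the n positions behave
# like a single giant bet. Illustrative numbers, not LTCM's positions.
from math import sqrt

def portfolio_vol(sigma, n, rho):
    return sigma * sqrt(1.0 / n + (1.0 - 1.0 / n) * rho)

sigma, n = 0.10, 100            # 100 positions, each with 10% volatility
calm   = portfolio_vol(sigma, n, 0.05)   # normal times: low correlation
crisis = portfolio_vol(sigma, n, 0.95)   # flight to quality: rho near 1

print(f"calm:   {calm:.4f}")    # roughly 0.024 -- diversification works
print(f"crisis: {crisis:.4f}")  # roughly 0.097 -- nearly a single bet
```

The portfolio's measured risk quadruples without a single position changing, which is what "correlations spiked toward one" means in practice.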
Within weeks, LTCM had lost most of its capital. Because the fund was enormously leveraged — it had borrowed roughly thirty dollars for every dollar of equity — the losses threatened not just the fund but its counterparties, and through them, the entire global financial system. The Federal Reserve Bank of New York organized a bailout, persuading fourteen major banks to inject $3.6 billion into the fund to prevent a disorderly collapse.
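The arithmetic behind that fragility is brutally simple. The sketch below uses stylized numbers, not LTCM's actual balance sheet:

```python
# The arithmetic of thirty-to-one leverage: losses hit the whole asset
# base but are absorbed by the sliver of equity alone. Stylized numbers,
# not LTCM's actual balance sheet.
equity = 1.0
borrowed = 30.0
assets = equity + borrowed      # $31 of positions per $1 of equity

for decline in (0.01, 0.02, 0.04):
    loss = assets * decline                  # the loss hits all assets...
    remaining = equity - loss                # ...but only equity absorbs it
    print(f"{decline:.0%} decline -> equity of {remaining:+.2f}")

# The decline that zeroes the equity entirely: 1/31, a bit over 3.2%.
wipeout = equity / assets
```

A 4 percent move in asset values, trivial in most markets, leaves the stylized fund insolvent, which is why the counterparties, and not just the partners, were at risk.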
The most elegant economic intuition of the twentieth century, taken to its logical extreme by some of the most brilliant economists who ever lived, had nearly broken the world.
LTCM and AlphaFold are, in a sense, opposite failures of the same relationship between theory and reality. LTCM's Nobel laureates had a beautiful theory that worked — until the world produced a scenario the theory had not imagined. AlphaFold's neural network had no theory at all — just patterns extracted from data — and it worked better than fifty years of biophysical theorizing. LTCM is what happens when you mistake your model for the world. AlphaFold is what happens when the world reveals it never needed your model. For economics, the implications run in both directions. The profession must guard against LTCM-style overconfidence in its theories. But it must also reckon with the possibility that AlphaFold-style prediction without understanding might, for many practical purposes, be enough.
The LTCM debacle is usually told as a story about hubris, and it is that. But it is also, more precisely, a story about the limits of intuition — about what happens when a model that captures something true and important about the world is mistaken for a complete description of the world. The Black-Scholes formula is correct in the way that Newtonian mechanics is correct: it provides an extraordinarily useful approximation that works beautifully under a wide range of conditions and fails catastrophically under conditions that lie outside its domain. The price theorist's error is not in using the model but in forgetting that it is a model — in confusing the map with the territory.
And this is precisely the danger that the anti-intuition camp identifies in price theory more broadly. When Kevin Murphy works through a problem at the blackboard, deploying supply-and-demand reasoning with breathtaking facility, he is doing something genuinely impressive. But he is also, inevitably, working within the limits of a particular set of mathematical tools — tools that assume differentiability, convexity, and other properties that the world does not always exhibit. When the tools work, the reasoning feels intuitive. When they do not, the reasoning feels wrong — but the wrongness is not always apparent, because the intuition has become so familiar that it is mistaken for direct perception of economic truth.
Topkis's theorem works precisely in the spaces where traditional intuition fails. The new empiricism works precisely in the spaces where traditional theory provides no guidance. The retreat of intuition is not a retreat from rigor. It is an advance toward a different kind of rigor — one that trusts data over theory, proof over persuasion, and formal results over the feeling of understanding.
But who, really, has won? Tyler's answer is unexpected: William Whewell.
Whewell (1794-1866) was a Victorian polymath of almost absurd range — the man who coined the words "scientist," "physicist," and "consilience," who wrote major works on the philosophy of science, the history of the inductive sciences, architecture, moral philosophy, and the tides. He was also, crucially, a critic of Ricardian economics — of the tradition, descending from David Ricardo, that attempted to derive economic truths from a small number of abstract principles through purely deductive reasoning. Whewell and his circle — which included figures like Richard Jones and Thomas Robert Malthus — believed that economics should be an empirical science, grounded in careful observation of actual economic institutions and practices, not a deductive science modeled on Euclidean geometry.
This is, Tyler points out, a very contemporary attitude. The modern empirical economist, with her natural experiments and her instrumental variables and her ninety-page papers full of robustness checks, is doing exactly what Whewell advocated: building economic knowledge inductively, from careful observation, rather than deductively, from first principles. "We economists have been living in William Whewell's world for quite some while," Tyler writes. "We just have not known it."
But this recognition carries an uncomfortable implication. If Whewell was right about method — if the way to build economic knowledge really is inductively from careful observation rather than deductively from first principles — then the marginalist revolution may have been right about specific insights while being wrong about how those insights should be established. Diminishing marginal utility, opportunity cost, equilibrium — these ideas may be true not because they were derived from elegant axioms but because they happen to match patterns that careful empirical observation would eventually have revealed anyway. The marginalists were not wrong. But they may have been right for the wrong reasons, and in a discipline that prizes methodology above all, that distinction matters more than it might seem.
The irony is rich. The marginalist revolution of the 1870s — the revolution that this book traces from its origins through its triumph and now into its partial eclipse — was, in many ways, a reassertion of the Ricardian deductive method against the empiricist critique. Jevons, Menger, and Walras all believed they had discovered fundamental principles — the principle of diminishing marginal utility, above all — from which the laws of economics could be derived. They were theorists, system-builders, architects of grand analytical frameworks. And for more than a century, their intellectual descendants dominated the profession.
But now the empiricists are ascendant. Not the crude empiricists of Whewell's day, who could do little more than collect statistics and describe patterns, but empiricists armed with extraordinary computational power, vast datasets, and identification strategies of remarkable sophistication. The credibility revolution in economics — the insistence on causal identification, on natural experiments, on quasi-experimental methods that approximate the gold standard of randomized controlled trials — has transformed the profession more profoundly than any theoretical innovation since the marginalist revolution itself.
And in doing so, it has rendered the old theoretical frameworks — including the marginalist framework — less central to the daily practice of economics. You do not need to derive demand curves from first principles if you can estimate them directly from data. You do not need to theorize about the welfare effects of a policy if you can measure them empirically. You do not need to build a model of how firms set prices if you can observe their pricing behavior in a dataset of scanner data covering millions of transactions.
Whewell Vindicated: The Return of Empiricism
Modern economics finally embraces the inductive method.
This does not mean that theory is dead, or that intuition is worthless. It means something more subtle and more interesting: that the relationship between theory and evidence has been reversed. In the old dispensation, theory came first and evidence was used to test and calibrate theoretical predictions. In the new dispensation, evidence comes first and theory is used — when it is used at all — to interpret empirical findings and to suggest mechanisms that might explain observed patterns. The economist of 2026 does not begin with a model and ask, "What does the data say about my model?" She begins with a phenomenon and asks, "What is happening here, and can I identify the causal effect?" Theory enters later, if it enters at all, as one possible framework for making sense of the results.
This is, in a deep sense, a return to the pre-marginalist world — to the world of Whewell and Malthus and the historical school, where economic knowledge was built from the ground up rather than derived from the top down. The difference is that today's empiricists have tools of astonishing power — tools that Whewell could not have imagined — and standards of evidence that make their findings far more reliable than anything the nineteenth-century empiricists could produce.
The LTCM story contains a lesson here too. The Black-Scholes formula was theory first — a gorgeous piece of deductive reasoning that was then applied to the world. It worked brilliantly, until it didn't. The modern empiricist looks at that story and says: this is what happens when you trust your models too much. Better to look at the data. Better to measure than to derive. Better to let the world tell you what is true than to tell the world what must be true.
Whether this represents progress or loss depends, ultimately, on what you think economics is for. If economics is a body of theoretical knowledge — a set of principles about how markets work and how people make decisions — then the retreat of price theory is a genuine impoverishment, a forgetting of hard-won wisdom. If economics is a method for discovering truths about the social world — a way of answering questions, whatever those questions might be — then the empirical turn is an unambiguous advance, a vast expansion of the discipline's power and reach.
Tyler, characteristically, does not fully choose. He is a price theorist by training and temperament — a man who thinks in terms of margins and incentives, who can work through a supply-and-demand problem with the best of them. But he is also an empiricist by conviction, someone who respects data and distrusts grand theoretical systems. He sees the loss clearly, and he does not minimize it. The ability to reason through economic problems using basic price theory is a genuine intellectual skill, and its decline among young economists is a genuine intellectual loss. But he also sees the gain. The economics profession today is more rigorous, more empirically grounded, and more relevant to the real world than it has ever been. If the price of that progress is the decline of a certain kind of theoretical virtuosity, well — Tyler knows better than anyone that everything has a cost, and the relevant question is always whether the benefits exceed it.
The marginalist revolution gave economics its theoretical foundation. The empirical revolution is giving economics its empirical foundation. Whether the profession can sustain both — whether the retreat of intuition will prove to be a temporary withdrawal or a permanent surrender — is one of the great open questions in the social sciences. But for now, the data is winning. And somewhere, William Whewell is smiling.