Chapter 12

Machines at the Frontier

In this chapter we discuss the factor zoo and machine-driven discovery, the talent migration reshaping economics, why prediction without explanation challenges economics differently than physics, what the post-AI economist will actually do, and whether the machines extend or end the marginal revolution.

What Claude changed in this chapter

Built from Cowen’s vision of AI’s impact on marginal economics. Claude expanded the talent migration section, added specific citations (Jumper et al., Feng et al., Manning-Zhu-Horton), wrote a counterargument on prediction-without-explanation, added two footnotes on instrumentalism vs. realism and the minimum-wage black-box problem, tightened the chess analogy by folding it into the main argument, looped the factor zoo back into the closing, and reframed the finale from resignation toward possibility.

In December 2020, a team of researchers at DeepMind, a subsidiary of Alphabet, announced that their program AlphaFold had essentially solved the protein folding problem. The problem — predicting the three-dimensional structure a protein will assume based solely on its amino acid sequence — had been considered one of the grand challenges of biology for half a century. Christian Anfinsen had won the Nobel Prize in 1972 for demonstrating that the sequence should determine the structure. But actually computing the fold from the sequence remained, for fifty years, a task of almost incomprehensible difficulty. The number of possible configurations a single protein chain might assume is astronomical — Levinthal's paradox notes that a modest protein would need longer than the age of the universe to sample all possible folds randomly. Thousands of doctoral careers were built around this problem. Entire subfields of structural biology existed to coax proteins into crystals, bombard them with X-rays, and painstakingly reconstruct their shapes one atom at a time.
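The arithmetic behind Levinthal's paradox is easy to reproduce. A minimal sketch, using illustrative figures of my own choosing (a 100-residue chain, three conformations per residue, an optimistic sampling rate; none of these numbers come from the chapter itself):

```python
# Levinthal back-of-envelope: all figures illustrative, not from the text.
residues = 100                    # a "modest" protein
conformations_per_residue = 3     # coarse estimate per backbone unit
samples_per_second = 1e13         # a generous molecular sampling rate

total_conformations = conformations_per_residue ** residues
seconds_needed = total_conformations / samples_per_second
years_needed = seconds_needed / 3.15e7   # ~3.15e7 seconds per year

age_of_universe_years = 1.38e10
print(f"conformations to search: {total_conformations:.1e}")
print(f"years to enumerate them: {years_needed:.1e}")
print(f"age of the universe:     {age_of_universe_years:.1e}")
```

Even at ten trillion samples per second, exhaustive search takes on the order of 10^27 years, roughly 10^17 lifetimes of the universe, which is why the field needed shortcuts rather than enumeration.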

Then a machine did it. Not perfectly at first, but by the time AlphaFold 2 competed in the CASP14 competition, its predictions were achieving accuracy comparable to experimental methods. The judges were, by several accounts, stunned. Some were unnerved.

Tyler Cowen does not begin his discussion of artificial intelligence with AlphaFold, but the story haunts every page of what follows. Because the protein folding revolution is not merely an analogy for what AI might do to economics — it is a demonstration, already completed, of exactly the transformation he describes. The machines did not replace the questions. They replaced the methods we used to answer them. And the new methods were, to human intuition, essentially opaque.

The Fall of Finance: When Marginalism Met Big Data

How the most data-rich field in economics abandoned theory for prediction.

To understand why Tyler sees economics as particularly vulnerable to this transformation, you have to understand which branch of economics fell first.

It was finance.

This may seem counterintuitive. Finance was, by most measures, the most advanced and most successful branch of economics. It had the best data — decades of daily stock prices, continuously updated, cleanly recorded, available to anyone willing to pay for a Bloomberg terminal. It had the highest stakes — trillions of dollars riding on whether theories were correct. And it had, arguably, the most rigorous models. The Capital Asset Pricing Model, developed independently by William Sharpe, John Lintner, and Jan Mossin in the 1960s, was explicitly and elegantly marginalist. CAPM rested on diminishing marginal utility of money — the same principle that William Stanley Jevons, Carl Menger, and Léon Walras had placed at the foundation of all economic reasoning a century earlier. An investor's required return on an asset, the model said, depended on how much that asset contributed to the risk of their overall portfolio. Risk was measured by Beta — the asset's covariance with the market, scaled by the market's variance. Higher Beta, higher expected return. Lower Beta, lower expected return. The logic was clean, the mathematics was beautiful, and the model earned Sharpe a Nobel Prize. [C1]
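In symbols (standard textbook notation, not reproduced from the chapter), the CAPM says the expected return on asset $i$ is

```latex
\mathbb{E}[R_i] = R_f + \beta_i \left( \mathbb{E}[R_m] - R_f \right),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
```

where $R_f$ is the risk-free rate and $R_m$ the market return; everything an investor needs to know about an asset's riskiness is compressed into the single number $\beta_i$.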

Then Eugene Fama and Kenneth French published their 1992 paper, and Beta stopped working.

"The Cross-Section of Expected Stock Returns" showed that Beta — the single variable upon which the entire CAPM edifice rested — had essentially no power to predict which stocks would earn higher returns. Size and book-to-market ratio did predict returns. Beta did not. Tyler puts it bluntly: "You can date the decline of marginalism to that 1992 paper." This is a provocative claim, and perhaps too tidy, but the underlying observation is devastating. The most sophisticated, most data-rich, most incentive-aligned branch of economics had produced a flagship model built on marginalist foundations, and that model's central prediction turned out to be empirically wrong. [C2]

What replaced it was not a better theory. What replaced it was more factors. Fama and French added size and value. Then momentum was added. Then profitability. Then investment. The "factor zoo" kept proliferating, each new factor improving predictive accuracy by some increment while making the overall framework less and less interpretable as any kind of coherent story about human behavior or market equilibrium.
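The progression can be written out in regression form (standard notation; this is the familiar five-factor specification, with momentum and later factors entering as further terms of the same shape — an expository assumption on my part, not an equation the chapter gives):

```latex
R_{it} - R_{ft} = \alpha_i
  + b_i \,(R_{mt} - R_{ft})   % CAPM's lone market factor
  + s_i \,\mathrm{SMB}_t      % size
  + h_i \,\mathrm{HML}_t      % value (book-to-market)
  + r_i \,\mathrm{RMW}_t      % profitability
  + c_i \,\mathrm{CMA}_t      % investment
  + \varepsilon_{it}
```

Each added factor improves the fit; none restores the single clean story that the market factor alone once told.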

And then the machines arrived.

Recent research in machine learning applied to finance has demonstrated that algorithms can predict stock returns better than traditional factor models — and not by a marginal improvement. The machine learning systems find patterns in data that no human had theorized, no economist had predicted, and no one could readily explain. Some models use hundreds of thousands of latent factors, reducing asset pricing errors by half or more compared to classic factor models. Tyler asks the question that should keep financial economists up at night: "What kinds of intuitions can possibly be supported by 360,000 factors?"

The answer, of course, is none. No human mind holds 360,000 intuitions about asset pricing simultaneously. The model works — it works better — but it does not work in a way that any human being can understand, narrate, or teach to a graduate student in a way that builds their judgment about markets. It is prediction without comprehension.
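A toy simulation conveys the flavor of this bargain. Below, a linear model on a handful of named signals competes against ridge regression on thousands of anonymous random features. Everything is synthetic — none of these numbers come from the asset-pricing literature the chapter alludes to — but it sketches why "more factors, less interpretability" can still mean better prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "returns": a nonlinear function of a few underlying signals.
n, d = 2000, 5
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=n)

def ridge(Z_tr, y_tr, Z_te, lam):
    """Ridge regression: closed-form fit on train features, predict on test."""
    k = Z_tr.shape[1]
    w = np.linalg.solve(Z_tr.T @ Z_tr + lam * np.eye(k), Z_tr.T @ y_tr)
    return Z_te @ w

# "Named factors": a linear model on the five interpretable signals.
pred_named = ridge(X[:1600], y[:1600], X[1600:], lam=1e-2)
mse_named = float(np.mean((pred_named - y[1600:]) ** 2))

# "Factor zoo": 2,000 anonymous cosine features with no story attached.
r = np.random.default_rng(1)
W = r.normal(size=(d, 2000)) * 0.5   # loose bandwidth, illustrative only
b = r.uniform(0.0, 2.0 * np.pi, size=2000)
Z = np.cos(X @ W + b)                # same feature map for train and test rows
pred_zoo = ridge(Z[:1600], y[:1600], Z[1600:], lam=10.0)
mse_zoo = float(np.mean((pred_zoo - y[1600:]) ** 2))

print(f"linear model on 5 named signals : test MSE {mse_named:.3f}")
print(f"ridge on 2000 anonymous features: test MSE {mse_zoo:.3f}")
```

On this synthetic data the anonymous features predict better, even though no single feature means anything a human could narrate — the factor zoo in miniature.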

Look at who the top hedge funds hire now. Not economists. They hire mathematicians, computer scientists, physicists — people who are comfortable building systems they cannot fully interpret, people whose training has prepared them to respect the answer even when they cannot see why it is the answer. The economists, with their insistence on interpretable models grounded in rational choice theory, increasingly find themselves at the margins of an industry their discipline once claimed to explain. [C3]

Machines Writing Economics: From Theory to Opaque Prediction

When algorithms become both researchers and subjects, comprehension becomes optional.

But finance was only the beginning. The transformation Tyler describes is now reaching into the methodological core of economics itself.

Consider "Fully Automated Social Science," a paper by Manning, Zhu, and Horton that reads less like a research article and more like a dispatch from a future that has already arrived. The authors use large language models to play roles in economic simulations — acting as consumers, firms, workers, policymakers — generating structural models of behavior, then feeding the results back into further LLM analysis. The language model is simultaneously the subject and the scientist. It generates the data and the hypotheses and the interpretive framework. The human researchers designed the system. They did not, in any traditional sense, do the research.

This is not an isolated experiment. The evidence that LLMs are becoming capable economic tools is accumulating with the unsettling speed of a literature that senses it is documenting something irreversible. Large language models already match human forecasting crowds in prediction accuracy — and they are incomparably cheaper. They outperform ARMA-GARCH models, the workhorses of time-series econometrics, for macroeconomic forecasting. They outperform human experts at predicting neuroscience results. When Jens Ludwig and Sendhil Mullainathan explored machine learning's capacity for hypothesis generation, they found that algorithms could identify novel predictive relationships — facial features that predicted judges' pretrial detention decisions, for instance — that no human researcher had theorized or even imagined. [C4]

The implications are methodologically profound. Economics, since the marginal revolution of the 1870s, has operated on a particular epistemological contract: we build models based on assumptions about human behavior (utility maximization, diminishing marginal returns, equilibrium), we derive predictions from those models, and we test the predictions against data. The models are meant to be understood. They are meant to provide intuition — portable insights that an economist carries in her head and applies to new situations. When a student learns that demand curves slope downward, she acquires not just a prediction but a way of seeing the world. That way of seeing — grounded in marginalist reasoning about individual choice at the margin — has been the discipline's greatest intellectual export for a century and a half.

What happens when the predictions can be generated without the understanding?

Tyler's answer is not reassuring.

Thinking Without Understanding: The Limits of Pattern Recognition

When the best predictions come from systems that cannot explain their reasoning.

In 2024, Google DeepMind published a paper titled "Grandmaster-Level Chess Without Search." The result was, even by the standards of a field accustomed to dramatic announcements, remarkable. A transformer model, fed fifteen billion chess positions, achieved an Elo rating of 2895 — competitive with the strongest human players who have ever lived. What made this extraordinary was not the rating itself, impressive as it was. It was the method. Traditional chess engines — Stockfish, the dominant program for years — work by searching millions of positions per second, evaluating each one, pruning the tree of possibilities. They are, in a meaningful sense, thinking ahead. The DeepMind transformer did not search. It looked at the position and recognized what to do, the way a human grandmaster claims to recognize the right move without calculating variations.

Tyler invokes José Raúl Capablanca, the Cuban world champion famous for his preternatural positional sense, who once said: "I see only one move ahead, but it is always the correct one." Capablanca was being characteristically droll — the remark is a boast disguised as modesty — but he was also describing something real. The greatest human chess players do not primarily calculate. They perceive. They have internalized so many patterns, so many structural relationships, that the right move presents itself to them with the immediacy of recognition rather than the labor of computation.

The DeepMind transformer does something similar, but it has internalized fifteen billion positions rather than a lifetime's worth. And here is the critical point: it does not understand chess. It has no concept of a pin, a fork, a passed pawn, a weak square. It has no theory of the game. It simply recognizes patterns at a resolution so fine that its performance rivals the greatest strategic minds our species has produced.

Tyler draws the parallel to economics with a sentence that lands like a quiet detonation: "Contemporary economics was born from the Marginal Revolution, but right now it is hard to see even one move ahead, much less the correct one." [C5]

Prediction Without Explanation: The AlphaFold Paradigm

A machine solved a biological puzzle but cannot tell us how it did so.

Think again about AlphaFold.

In July 2022, DeepMind released predicted structures for nearly every protein known to science — roughly 200 million structures, covering almost the entire UniProt database. They gave the entire dataset away: available to any researcher, anywhere, at no cost. It was, by any reasonable accounting, one of the largest single contributions to biological knowledge in history.

The reaction in the structural biology community was complex. Some researchers — those who had spent careers developing experimental techniques for structure determination — found that their life's work had been, if not exactly automated, then drastically devalued. A crystallography experiment that might have taken a graduate student two years could now be approximated in seconds. Other researchers were liberated. Freed from the drudgery of structure determination, they could ask bigger questions — questions about protein dynamics, about interactions, about the relationship between structure and function in contexts too complex for previous methods.

But here is what almost no one in the community could do: explain why AlphaFold's predictions were correct. The neural network had learned something about the relationship between amino acid sequences and three-dimensional folds. What it had learned was encoded in millions of parameters, distributed across layers of abstraction that no human mind could decompose into interpretable principles. The fifty years of biophysical theory that had informed earlier, less successful approaches to the problem — theories about hydrophobic collapse, hydrogen bonding, van der Waals forces, entropy — were presumably "in there" somewhere, baked into patterns the network had extracted from training data. But they were not accessible as theories. They were not sayable. [C6]

This is the situation Tyler sees approaching for economics. The marginalist framework — utility maximization, equilibrium, comparative statics, all of it — will not be refuted. It will be absorbed. It will be baked into the inputs we feed our tools of AI, but eventually invisible, "like a sacred book that inspired later religions, but is no longer read or debated." The holy texts of Jevons and Walras and Marshall will not be burned. They will simply stop being opened.

What Economists Will Actually Do: Judgment Over Technique

Understanding may matter less for making predictions, but it will matter for deciding which predictions to trust.

A paper called "The AI Scientist" proposes fully automated open-ended scientific discovery — systems that formulate hypotheses, design experiments, run them, analyze results, and write up findings, all without human intervention. Jensen Huang, the CEO of Nvidia, has declared that "biology has the opportunity to be engineering, not science" — a statement that, depending on your temperament, sounds either thrilling or chilling. Tyler, characteristically, seems to find it both. [C7]

The vision these developments collectively suggest is one in which the practice of economics — the daily work of building models, testing hypotheses, generating predictions — is increasingly performed by systems whose outputs are more accurate than ours but whose reasoning is inaccessible to us. We will be able to ask an AI system what will happen to employment if the minimum wage increases by two dollars, and the system will give us an answer, and the answer will very likely be better than anything a team of labor economists could produce using traditional methods. But we will not be able to ask why, at least not in the way economists have traditionally meant that question — not in the sense of "what is the causal mechanism, expressed in terms I can understand and teach and argue about?"

This represents a genuine epistemological crisis, and Tyler does not pretend otherwise. But he is also too honest to pretend that the old regime was as solid as its defenders claim. "Maybe our intuitions about the world were never so strong in the first place," he writes. This is the most unsettling suggestion in the book — not that AI will replace our understanding, but that the understanding we are losing was always thinner than we believed.

Think about the claims that marginalism made. Demand curves slope downward because of diminishing marginal utility. Firms hire workers until the marginal product equals the wage. Markets clear because prices adjust. These claims feel intuitive, and their intuitiveness has been treated as a virtue — evidence that the theory captures something true about the world. But Tyler suggests that we may have had the causation backwards. We did not find these results intuitive because they were true. We found them intuitive because our minds are drawn to simple, narratable explanations, and we mistook the comfort of comprehension for the confirmation of correctness. We valued intuitive results, he suggests, "as a kind of cope and security blanket." [C8]

This is a hard accusation for any discipline to absorb, and Tyler is himself an economist, so the accusation falls on his own head too. But there is a liberating element in the admission. If our intuitions were always limited — "just a small corner of understanding, swimming in a larger froth of epistemic chaos" — then losing them to machines is less tragic than it might otherwise seem. You cannot mourn the loss of something you never fully had.

This is not the same as saying marginalism was false. Some of its core claims — that people respond to incentives, that marginal changes matter, that prices coordinate information — appear to be true in a way that even opaque algorithms confirm. But the question Tyler raises is whether these claims were true because they captured something deep about human nature and market dynamics, or whether they were true despite being rooted in our need for simple, narratable explanations. If the machines can predict economic behavior accurately without ever learning a supply-and-demand curve, what does that tell us about what marginalism was really doing? Was it capturing reality, or was it a remarkably effective cognitive technology — a set of stories humans told themselves to navigate a world too complex to fully grasp? The discomfort is in not being able to answer this question. And the honesty is in admitting we may never be able to.

The Post-AlphaFold Future: From Solving to Interpreting

Machines solved the problem; humans must now learn to ask questions machines cannot frame.

Let me return one final time to the proteins.

After AlphaFold's triumph, structural biologists did not vanish. They adapted. Some became interpreters — people who could take AlphaFold's predictions and contextualize them within biological questions the algorithm could not frame. Some became curators — people who could identify where AlphaFold's predictions were less reliable and where experimental validation was still needed. Some became questioners — people who used the flood of new structural data to ask questions that would have been inconceivable when each structure took years to determine.

None of them went back to the old methods. You do not hand-compute a payroll after discovering spreadsheets. You do not navigate by the stars after acquiring GPS. And you do not spend two years crystallizing a protein whose structure a neural network can predict in seconds.

But the biologists who thrived in the post-AlphaFold world were not the ones who best understood the neural network. They were the ones who best understood biology — who could identify which questions mattered, which predictions to trust, which implications were significant and which were trivial. Their deep theoretical knowledge, far from being made obsolete, became the interpretive lens through which machine-generated knowledge was made useful. The irony is precise and, to Tyler's argument, crucial: the theory became more valuable as a way of reading results just as it became less valuable as a way of generating them. [C9]

Tyler does not quite say this about economics, but the implication is available to any reader willing to draw it. The marginalist toolkit — supply and demand, opportunity cost, thinking at the margin — may lose its role as the generative engine of economic knowledge. But it may gain a new role as the interpretive framework through which machine-generated economic knowledge becomes comprehensible to human minds and translatable into human decisions. We will still need to explain to voters why a tariff is costly, to students why prices coordinate information, to policymakers why incentives matter. The machines cannot give stump speeches. They cannot teach Econ 101. They cannot write op-eds that change minds, because changing minds requires meeting minds where they are, and human minds are still, for the moment, where the decisions get made.

For the moment.

Beyond the Marginal Revolution: What Endures, What Fades

A framework built to help humans think may survive as a tool to help humans interpret.

Tyler ends his discussion of AI and the future of economics in a register that is difficult to characterize — not optimistic, not pessimistic, but something more unusual.

He seems interested. Whatever is coming, it is not boring, and for Tyler Cowen, that may be the highest compliment available.

He quotes the economist Arnold Kling, who, after surveying the implications of AI for the social sciences, concluded his analysis with four words: "Have a nice day." The joke, if it is a joke, works because it captures the precise flavor of helplessness that accompanies genuine uncertainty. We do not know what economics will look like in twenty years. We do not know whether the discipline will be recognizable. We do not know whether the questions that animated Jevons and Menger and Walras — questions about value, about exchange, about the mysterious alchemy by which subjective desires become objective prices — will still be asked, or whether they will have been answered so thoroughly by machines that asking them will seem quaint, like debating how many angels can dance on the head of a pin. [C10]

What we do know is this: the marginal revolution was, at its core, a revolution in how human beings understood their own decision-making. It said that value is not intrinsic to objects but arises at the margin, in the particular circumstances of particular choices. It said that the next unit matters more than the average. It said that human behavior, however chaotic it appears in aggregate, is governed by a comprehensible logic at the level of individual choice.

For a century and a half, that insight was powerful enough to build a discipline around. It organized research, guided policy, won Nobel Prizes, and shaped how millions of people understood the economy they lived in. It was, by any reasonable standard, one of the most successful intellectual frameworks in the history of the social sciences.

And now it is being absorbed into something larger, something more powerful, something we do not yet fully understand. The marginal revolution is not ending. It is being metabolized — taken up into the body of a new kind of knowledge that will carry its insights forward in forms we cannot yet predict, embedded in systems we cannot yet interpret, serving purposes we cannot yet imagine.

The proteins fold according to laws we spent fifty years trying to articulate. The machine learned those laws in weeks, from data alone, and now predicts folds we never could have computed. The laws did not change. Our relationship to them did.

Contemporary economics was born from the marginal revolution. Whatever comes next will be born from this one.

The diamond-water paradox that opened this book asked a simple question: why do we value diamonds more than water? The marginalists answered it by looking at the margin — at the next unit, the particular circumstance, the subjective valuation of the individual chooser. It was a beautiful answer, and it was right. But the machines do not need to know it was right. They can predict which goods people will value, at what price, in what quantity, without ever encountering the concept of marginal utility. They have found another path to the same destination — a path we cannot walk, because it runs through dimensions of data our minds were never built to navigate.

Have a nice day.