Decision Markets for AI
Summary: AI models are expensive to train. As a result, when model designers propose novel architectures, they must guess which of the competing options will perform best, and there is no well-formulated process for making this high-stakes guess. A decision market could help groups decide which architecture to train. By requiring participants to place bets on how well each architecture will perform on specific benchmarks, a decision market turns intuitions into falsifiable expectations about model performance and rewards the participants who predict accurately. This design should select among candidate models more efficiently, and thus allocate capital in a way that advances frontier AI models.
—
In the race to develop advanced AI systems, one of the most pressing challenges isn't computational power or algorithmic innovation—it's deciding what to build.
How do organizations choose which models to train? Traditional funding mechanisms, for AI and for science in general, often rely on centralized decision-making bodies. Decision markets, in contrast, create incentives for those with relevant knowledge to reveal it through their betting behavior.
The Cost of Uncertainty
Consider what happens when an AI lab contemplates training a new large language model. The lab might have several competing architectural proposals: one team advocates a sparse attention mechanism, another favors a novel approach to context compression, and a third pushes an experimental mixture-of-experts design.
Each proposal comes with compelling theoretical justifications, but only one can be trained, at least initially. At a firm like OpenAI, for example, scientists may propose a few model architectures, and a manager or management team signs off on the training run, regardless of how well they understand the technology or the math behind it. Managers may choose well when the candidate architectures resemble ones they already know, but the field moves fast, and their accuracy at picking the best models may wane over time.
The wrong choice could mean months of wasted compute and millions in sunk costs.
Enter Information Finance
A decision market could help resolve this dilemma. Here's how it works: when a team proposes a new model architecture, they also specify concrete performance benchmarks—for instance, “this architecture will achieve 95% accuracy on HellaSwag while maintaining a throughput of at least 280 tokens per second.” Rather than immediately committing resources to training, the organization opens a prediction market where participants can buy shares representing “yes” or “no” positions on whether the model will meet these benchmarks.
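To make this concrete, here is a minimal sketch, in Python, of how a proposal might be encoded as a falsifiable claim that a market can later resolve against. The names here (`BenchmarkTarget`, `ArchitectureProposal`) are illustrative assumptions, not part of any existing system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkTarget:
    name: str          # e.g. "HellaSwag accuracy (%)"
    threshold: float   # the level the model must reach
    higher_is_better: bool = True

    def is_met(self, observed: float) -> bool:
        """Resolve this target against a measured result."""
        if self.higher_is_better:
            return observed >= self.threshold
        return observed <= self.threshold

@dataclass(frozen=True)
class ArchitectureProposal:
    name: str
    targets: tuple[BenchmarkTarget, ...]

    def resolves_yes(self, results: dict[str, float]) -> bool:
        """The market resolves YES only if every stated target is met."""
        return all(t.is_met(results[t.name]) for t in self.targets)

# The example benchmarks from the text, encoded as a checkable claim.
proposal = ArchitectureProposal(
    name="sparse-attention-v1",
    targets=(
        BenchmarkTarget("HellaSwag accuracy (%)", 95.0),
        BenchmarkTarget("throughput (tokens/s)", 280.0),
    ),
)
```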
This structure creates a productive dynamic: participants must put real stakes behind their architectural intuitions. A researcher confident in a particular attention mechanism must translate that confidence into a specific bet about performance metrics. The market aggregates these informed guesses into a probability estimate; if the market-implied probability of meeting the benchmarks exceeds a predetermined threshold (say, 80%), the model moves forward to training.
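The text leaves the market mechanism itself unspecified; one standard choice for prediction markets is Hanson's logarithmic market scoring rule (LMSR), in which an automated market maker quotes prices and the instantaneous YES price can be read as the market's probability estimate. A minimal sketch, wired to the 80% threshold from above, might look like this (the `LMSRMarket` class and `should_train` helper are hypothetical):

```python
import math

class LMSRMarket:
    """Binary YES/NO market using Hanson's logarithmic market scoring rule."""

    def __init__(self, b: float = 100.0):
        self.b = b        # liquidity parameter: larger b = prices move more slowly
        self.q_yes = 0.0  # outstanding YES shares
        self.q_no = 0.0   # outstanding NO shares

    def _cost(self, q_yes: float, q_no: float) -> float:
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        """Current YES price, read as the market-implied probability."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, side: str, shares: float) -> float:
        """Buy shares of 'yes' or 'no'; returns the cost the trader pays."""
        before = self._cost(self.q_yes, self.q_no)
        if side == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

def should_train(market: LMSRMarket, threshold: float = 0.80) -> bool:
    """Decision rule: train only if the market-implied probability
    of meeting the benchmarks clears the threshold."""
    return market.price_yes() >= threshold
```

The liquidity parameter `b` is itself a design choice: the larger it is, the more capital it takes to move the price, and therefore to swing the training decision.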
Like any market, this approach harnesses distributed knowledge. Different participants might have different reasons for their confidence: some might have run smaller-scale experiments, others might have theoretical insights about the architecture's properties, and still others might have relevant experience with similar approaches. The market mechanism combines these diverse perspectives into a single signal.
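Continuing the sketch above, a toy simulation shows how that aggregation plays out: budget-limited traders, each holding a different private belief about the proposal, push the LMSR price toward their own estimate until their capital runs out, and the closing price settles on a roughly capital-weighted consensus. The beliefs and budgets below are invented purely for illustration:

```python
# Reuses the LMSRMarket class from the sketch above.
market = LMSRMarket(b=100.0)

# (belief, budget) pairs: a small-scale experimenter, a theorist,
# a veteran of similar architectures, and a skeptic.
traders = [(0.90, 50.0), (0.70, 50.0), (0.85, 50.0), (0.60, 25.0)]

for belief, budget in traders:
    spent = 0.0
    while spent < budget:
        p = market.price_yes()
        if abs(p - belief) < 5e-3:
            break  # price is close enough to this trader's belief
        side = "yes" if belief > p else "no"
        spent += market.buy(side, 1.0)  # trade one share at a time

print(f"market-implied probability: {market.price_yes():.2f}")
```

No single trader sets the final price; each moves it only as far as their stake allows, so the output lands between the extremes, weighted by how much capital backs each view.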
But the beauty of this approach is not that it enhances accuracy per se. It is that it gives market participants an incentive to get better at judging which models are worth investing in, creating a feedback loop that rewards accurate prediction and punishes overconfidence.
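Settlement is what closes that loop. Reusing the `ArchitectureProposal` sketch from earlier, resolution might pay one unit per winning share once the trained model's measured results arrive; over repeated markets, forecasters who call outcomes correctly compound their capital and influence, while the overconfident lose their stake. The results below are hypothetical numbers chosen to resolve YES:

```python
def settle(holdings: dict[str, dict[str, float]],
           proposal: ArchitectureProposal,
           results: dict[str, float]) -> dict[str, float]:
    """holdings: trader -> {"yes": shares, "no": shares}.
    Returns each trader's payout at one unit per winning share."""
    winning_side = "yes" if proposal.resolves_yes(results) else "no"
    return {trader: pos.get(winning_side, 0.0) for trader, pos in holdings.items()}

# Hypothetical outcome: the trained model hits 95.4% accuracy at 291 tok/s,
# so both targets are met and the market resolves YES.
results = {"HellaSwag accuracy (%)": 95.4, "throughput (tokens/s)": 291.0}
payouts = settle(
    {"alice": {"yes": 120.0}, "bob": {"no": 80.0}},
    proposal,  # from the first sketch
    results,
)
print(payouts)  # {'alice': 120.0, 'bob': 0.0}
```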
Beyond AI
This approach could extend beyond AI development. The decision market described here is, in essence, a form of participatory budgeting built on falsifiable claims about the world. It could be applied to any domain where:
- Decisions are expensive
- Outcomes are measurable
- Different stakeholders have distributed knowledge about likely outcomes
Think of scientific research funding, infrastructure projects, or even corporate R&D portfolios. The key insight is that by forcing predictions to be concrete and creating real stakes for those predictions, we can more efficiently allocate capital toward promising ventures.

Of course, this approach is not without pitfalls. Market participants might try to game the system by proposing easily achievable benchmarks, so DAO governance would have to be carefully designed to avoid such gamesmanship. Then there is the question of who gets to participate in these markets: should participation be limited to technical experts, or open to anyone willing to stake capital? Only active experimentation can answer that question. Careful market design, along with possible oversight from a governing body, would be critical.
A Hayekian Solution to Innovation Funding
By creating a market mechanism for architectural decisions, we're not just making better predictions—we're creating a new way to coordinate complex technical decisions in an environment of radical uncertainty. This approach to innovation funding captures what Hayek called "the knowledge of the particular circumstances of time and place." It's a solution that acknowledges both the distributed nature of technical knowledge and the need for efficient capital allocation in advancing frontier AI capabilities.
What this article describes is more than a new kind of prediction market. Like everything we do at Layer, it's a new kind of infrastructure for collaborative decision-making under uncertainty. In a field as complex and fast-moving as AI development, that might be exactly what we need.