The market is betting that AI is an unprecedented technology breakthrough, treating Sam Altman and Jensen Huang like demigods already astride the world. The slow progress of enterprise AI adoption from pilot to production, however, still suggests at least the possibility of a less earthshaking future. Which is right?
At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present. Every day, news items land, and if you read them with a kind of soft focus, they slowly add up. Trends are vectors with both a magnitude and a direction, and by watching a series of data points light up those vectors, you can see possible futures taking shape.
This is how we’ve always identified topics to cover in our publishing program, our online learning platform, and our conferences. We watch what we call “the alpha geeks”: paying attention to hackers and other early adopters of technology with the conviction that, as William Gibson put it, “The future is here, it’s just not evenly distributed yet.” As a great example of this today, note how the industry hangs on every word from AI pioneer Andrej Karpathy, hacker Simon Willison, and AI for business guru Ethan Mollick.
We are also fans of a discipline called scenario planning, which we learned decades ago during a workshop with Lawrence Wilkinson about possible futures for what is now the O’Reilly learning platform. The point of scenario planning is not to predict any single future but rather to stretch your imagination in the direction of radically different futures and then to identify “robust strategies” that can survive whichever of them arrives. Scenario planners also use a version of our “watching the alpha geeks” methodology. They call it “news from the future.”
Is AI an Economic Singularity or a Normal Technology?
For AI in 2026 and beyond, we see two fundamentally different scenarios that have been competing for attention. Nearly every debate about AI, whether about jobs, about investment, about regulation, or about the shape of the economy to come, is really an argument about which of these scenarios is correct.
Scenario one: AGI is an economic singularity. AI boosters are already backing away from predictions of imminent superintelligent AI leading to a complete break with all human history, but they still envision a fast takeoff of systems capable enough to perform most cognitive work that humans do today. Not perfectly, perhaps, and not in every domain immediately, but well enough, and improving fast enough, that the economic and social consequences will be transformative within this decade. We might call this the economic singularity (to distinguish it from the more complete singularity envisioned by thinkers from John von Neumann, I. J. Good, and Vernor Vinge to Ray Kurzweil).
In this possible future, we aren’t experiencing an ordinary technology cycle. We are experiencing the start of a civilization-level discontinuity. The nature of work changes fundamentally. The question is not which jobs AI will take but which jobs it won’t. Capital’s share of economic output rises dramatically; labor’s share falls. The companies and countries that master this technology first will gain advantages that compound rapidly.
If this scenario is correct, most of the frameworks we use to think about technology adoption are wrong, or at least inadequate. The parallels to previous technology transitions such as electricity, the internet, or mobile are misleading because they suggest gradual diffusion and adaptation. What’s coming will be faster and more disruptive than anything we’ve experienced.
Scenario two: AI is a normal technology. In this scenario, articulated most clearly by Arvind Narayanan and Sayash Kapoor of Princeton, AI is a powerful and important technology but nonetheless subject to all the normal dynamics of adoption, integration, and diminishing returns. Even if we develop true AGI, adoption will still be a slow process. Like previous waves of automation, it will transform some industries, augment many workers, displace some, but most importantly, take decades to fully diffuse through the economy.
In this world, AI faces the same barriers that every enterprise technology faces: integration costs, organizational resistance, regulatory friction, security concerns, training requirements, and the stubborn complexity of real-world workflows. Impressive demos don’t translate smoothly into deployed systems. The ROI is real but incremental. The hype cycle does what hype cycles do: Expectations crash before realistic adoption begins.
If this scenario is correct, the breathless coverage and trillion-dollar valuations are symptoms of a bubble, not harbingers of transformation.
Reading News from the Future
These two scenarios lead to radically different conclusions. If AGI is an economic singularity, then massive infrastructure investment is rational, and companies borrowing hundreds of billions to spend on data centers to be used by companies that haven’t yet found a viable economic model are making prudent bets. If AI is a normal technology, that spending looks like the fiber-optic overbuild of 1999. It’s capital that will largely be written off.
If AGI is an economic singularity, then workers in knowledge professions should be preparing for fundamental career transitions; firms should be radically rethinking their products, services, and business models; and societies should be planning for disruptions to employment, taxation, and social structure that dwarf anything in living memory.
If AI is a normal technology, then workers should be learning to use new tools (as they always have), but the breathless displacement predictions will join the long list of automation anxieties that never quite materialized.
So, which scenario is correct? We don’t know yet; we don’t even know whether this face-off is the right framing of possible futures. But we do know that a year or two from now, we will tell ourselves that the answer was right there, in plain sight. How could we not have seen it? We weren’t reading the news from the future.
Some news is hard to miss: the change in tone of reporting in the financial markets and, perhaps more importantly, the change in tone from Sam Altman and Dario Amodei. If you follow tech closely, it’s also hard to miss news of real technical breakthroughs, and if you’re involved in the software industry, as we are, it’s hard to miss the real advances in programming tools and practices. There’s also an area we’re particularly interested in, one we think tells us a great deal about the future: market structure. That’s where we’ll start.
The Market Structure of AI
The economic singularity scenario has been framed as a winner-takes-all race for AGI that creates a massive concentration of power and wealth. The normal technology scenario suggests much more of a rising tide, where the technology platforms become dominant precisely because they create so much value for everyone else. Winners emerge over time rather than with a big bang.
Quite frankly, we have one big signal that we’re watching here: Which of OpenAI, Anthropic, and Google achieves product-market fit first? By product-market fit we don’t just mean that users love the product or that one company has dominant market share but that a company has found a viable economic model, where what people are willing to pay for AI-based services is greater than the cost of delivering them.
OpenAI appears to be trying to blitzscale its way to AGI, building out capacity far in excess of the company’s ability to pay for it. This is a massive one-way bet on the economic singularity scenario, which makes ordinary economics irrelevant. Sam Altman has even said that he has no idea what his business will be post-AGI or what the economy will look like. So far, investors have been buying it, but doubts are beginning to shape their decisions.
Anthropic is clearly in pursuit of product-market fit, and its success in one target market, software development, is leading the company on a shorter and more plausible path to profitability. Anthropic’s leaders talk AGI and economic singularity, but they walk the walk of normal technology believers. The fact that Anthropic is likely to beat OpenAI to an IPO is a very strong normal technology signal. It’s also a good example of what scenario planners view as a robust strategy, good in either scenario.
Google gives us a different take on normal technology: an incumbent looking to balance its existing business model with advances in AI. In Google’s normal technology vision, AI disappears “into the walls” like networks did. Right now, Google is still foregrounding AI with AI Overviews and NotebookLM, but it’s in a position to make AI recede into the background of its entire suite of products, from Search and Google Cloud to Android and Google Docs. It has too much at stake in the current economy to believe that the route to the future consists in blowing it all up. That being said, Google also has the resources to place big bets on new markets with clear economic potential, like self-driving cars, drug discovery, and even data centers in space. It’s even competing with Nvidia, not just with OpenAI and Anthropic. This is also a robust strategy.
What to watch for: What tech stack are developers and entrepreneurs building on?
Right now, Anthropic’s Claude appears to be winning that race, though that could change quickly. Developers are increasingly not locked into a proprietary stack but are easily switching based on cost or capability differences. Open standards such as MCP (Model Context Protocol) are gaining traction.
On the consumer side, Google Gemini is gaining on ChatGPT in terms of daily active users, and investors are starting to question OpenAI’s lack of a plausible business model to support its planned investments.
These developments suggest that the key idea behind the massive investment driving the AI boom, that one winner gets all the advantages, just doesn’t hold up.
Capability Trajectories
The economic singularity scenario depends on capabilities continuing to improve rapidly. The normal technology scenario is comfortable with limits rather than hyperscaled discontinuity. There is already a lot of news on this front to digest.
On the economic singularity side of the ledger, positive signs would include a capability jump that surprises even insiders, such as Yann LeCun’s objections being overcome: AI systems that demonstrably have world models, can reason about physics and causality, and aren’t just sophisticated pattern matchers. Another game changer would be a robotics breakthrough: embodied AI that can navigate novel physical environments and perform useful manipulation tasks.
Evidence that AI is a normal technology includes AI systems that are good enough to be useful but not good enough to be trusted, continuing to require human oversight that limits productivity gains; prompt injection and other security vulnerabilities remaining unsolved, constraining what agents can be trusted to do; domain complexity continuing to defeat generalization, so that what works in coding doesn’t transfer to medicine, law, or science; regulatory and liability barriers proving high enough to slow adoption regardless of capability; and professional guilds successfully protecting their territory. These problems may be solved over time, but they don’t just disappear with a new model release.
Regard benchmark performance with skepticism. Benchmarks are already likely to be gamed now, while everyone is still afraid of missing out, and they will be gamed even more aggressively once investors start losing enthusiasm.
Reports from practitioners actually deploying AI systems are far more important. Right now, tactical progress is strong. We see software developers in particular making profound changes in development workflows. Watch for whether they are seeing continued improvement or a plateau. Is the gap between demo and production narrowing or persisting? How much human oversight do deployed systems require? Listen carefully to reports from practitioners about what AI can actually do in their domain versus what it’s hyped to do.
We are not persuaded by surveys of corporate attitudes. Having lived through the realities of internet and open source software adoption, we know that, like Hemingway’s marvelous metaphor of bankruptcy, corporate adoption happens gradually, then suddenly, with late adopters often full of regret.
If AI is achieving general intelligence, though, we should see it succeed across multiple domains, not just the ones where it has obvious advantages. Coding has been the breakout application, but coding is in some ways the ideal domain for current AI. It’s characterized by well-defined problems, immediate feedback loops, formally defined languages, and massive training data. The real test is whether AI can break through in domains that are harder and farther away from the expertise of the people developing the AI models.
What to watch for: Real-world constraints start to bite. For example, what if there is not enough power to train or run the next generation of models at the scale that companies’ ambitions require? What if capital for the AI build-out dries up?
Our bet is that various real-world constraints will become more clearly recognized as limits to the adoption of AI, despite continued technical advances.
Bubble or Bust?
It’s hard not to notice how the narrative in the financial press has shifted in the past few months, from mindless acceptance of industry narratives to a growing consensus that we are in the throes of a massive investment bubble, with the chief question on everyone’s mind seeming to be when and how it will pop.
The current moment does bear uncomfortable similarities to previous technology bubbles. Famed short seller Michael Burry is comparing Nvidia to Cisco and warning of a worse crash than the dot-com bust of 2000. The circular nature of AI investment—in which Nvidia invests in OpenAI, which buys Nvidia chips; Microsoft invests in OpenAI, which pays Microsoft for Azure; and OpenAI commits to massive data center build-outs with little evidence that it will ever have enough profit to justify those commitments—has reached levels that would be comical if the numbers weren’t so large.
But there’s a counterargument: Every transformative infrastructure build-out begins with a bubble. The railroads of the 1840s, the electrical grid of the 1900s, and the fiber-optic networks of the 1990s all involved speculative excess, but all left behind infrastructure that powered decades of subsequent growth. One question is whether AI infrastructure is like the dot-com bubble (which left behind useful fiber and data centers) or the housing bubble (which left behind empty subdivisions and a financial crisis).
The real question when faced with a bubble is this: What will be the source of value in what is left? It most likely won’t be in the AI chips, which have a short useful life. It may not even be in the data centers themselves. It may be in a new approach to programming that unlocks entirely new classes of applications. But one pretty good bet is that there will be enduring value in the energy infrastructure build-out. Given the Trump administration’s war on renewable energy, the market demand for energy in the AI build-out may be its saving grace. A future of abundant, cheap energy rather than the current fight for access that drives up prices for consumers could be a very nice outcome.
Signs pointing toward economic singularity: Sustained high utilization of AI infrastructure (data centers, GPU clusters) over multiple years; actual demand meets or exceeds capacity; major new applications emerge that just couldn’t exist without AI; continued spiking of energy prices, especially in areas with many data centers.
Signs pointing toward bubble: Continued reliance on circular financing structures (vendor financing, equity swaps between AI companies); enterprise AI projects stall in the pilot phase, failing to scale; a “show me the money” moment arrives, where investors demand profitability and AI companies can’t deliver.
Signs pointing toward normal technology recovery post-bubble: Strong revenue growth at AI application companies, not just infrastructure providers; enterprises report concrete, measurable ROI from AI deployments.
What to watch: There are so many possibilities that this is an act of imagination! Start with Wile E. Coyote running over a cliff in pursuit of the Road Runner in the classic Warner Bros. cartoons. Imagine the moment when investors realize that they are trying to defy gravity.
What made them notice? Was it the failure of a much-hyped data center project? Was it that it couldn’t get financing, that it couldn’t get completed because of regulatory constraints, that it couldn’t get enough chips, that it couldn’t get enough power, that it couldn’t get enough customers?
Imagine one or more storied AI labs or startups unable to complete their next fundraise. Imagine Oracle or SoftBank trying to get out of a big capital commitment. Imagine Nvidia announcing a revenue miss. Imagine another DeepSeek moment coming out of China.
Our bet for the most likely pin to pop the bubble is that Anthropic and Google’s success against OpenAI persuades investors that OpenAI will not be able to pay for the massive amount of data center capacity it has contracted for. Given the company’s centrality to the AGI singularity narrative, a failure of belief in OpenAI could bring down the whole web of interconnected data center bets, many of them financed by debt. But that’s not the only possibility.
Always Update Your Priors
DeepSeek’s emergence in January 2025 was a signal that the American AI establishment may not have the commanding lead it assumed. Rather than racing for AGI, China seems to be heavily betting on normal technology, building towards low-cost, efficient AI, industrial capacity, and clear markets. While claims about what DeepSeek spent on training its V3 model have been contested, training isn’t the only cost: There’s also the cost of inference and, for increasingly popular reasoning models, the cost of reasoning. And when these are taken into account, DeepSeek is very much a leader.
If DeepSeek and other Chinese AI labs are right, the US may be intent on winning the wrong race. What’s more, our conversations with Chinese AI investors reveal a much heavier tilt towards embodied AI (robotics and all its cousins) than towards consumer or even enterprise applications. Given the geopolitical tensions between China and the US, it’s worth asking what kind of advantage a GPT-9 with limited access to the real world might provide against an army of drones and robots powered by the equivalent of GPT-8!
The point is that the discussion above is meant to be provocative, not exhaustive. Expand your horizons. Think about how US and international politics, advances in other technologies, and financial market impacts ranging from a massive market collapse to a simple change in investor priorities might change industry dynamics.
What you’re watching for is not any single data point but the pattern across multiple vectors over time. Remember that the AGI versus normal technology framing is not the only or maybe even the most useful way to look at the future.
The most likely outcome, even restricted to these two hypothetical scenarios, is something in between. AI may achieve something like AGI for coding, text, and video while remaining a normal technology for embodied tasks and complex reasoning. It may transform some industries rapidly while others resist for decades. The world is rarely as neat as any scenario.
But that’s precisely why the “news from the future” approach matters. Rather than committing to a single prediction, you stay alert to the signals, ready to update your thinking as evidence accumulates. You don’t need to know which scenario is correct today. You need to recognize which scenario is becoming correct as it happens.
What If? Robust Strategies in the Face of Uncertainty
The second part of scenario planning is to identify robust strategies that will help you do well regardless of which possible future unfolds. In this final section, as a way of making clear what we mean by that, we’ll consider 10 “What if?” questions and ask what the robust strategies might be.
1. What if the AI bubble bursts in 2026?
The vector: We are seeing massive funding rounds for foundation model companies and massive capital expenditure on GPUs and data centers without a corresponding explosion in revenue for the application layer.
The scenario: The “revenue gap” becomes undeniable. Wall Street loses patience. Valuations for foundation model companies collapse, and the river of cheap venture capital dries up.
In this scenario, we would see responses like OpenAI’s “Code Red” reaction to improvements in competing products. We would see declines in prices for stocks that aren’t yet traded publicly. And we might see signs that the massive announced commitments to data centers and power are performative, not backed by real capital. In the words of one commenter, they are “bragawatts.”
A robust strategy: Don’t build a business model that relies on subsidized intelligence. If your margins only work because VC money is paying for 40% of your inference costs, you are vulnerable. Focus on unit economics. Build products where the AI adds value that customers are willing to pay for now, not in a theoretical future where AI does everything. If the bubble bursts, infrastructure will remain, just as the dark fiber did, becoming cheaper for the survivors to use.
2. What if energy becomes the hard limit?
The vector: Data centers are already stressing grids. We are seeing a shift from the AI equivalent of Moore’s law to a world where progress may be limited by energy constraints.
The scenario: In 2026, we hit a wall. Utilities simply cannot provision power fast enough. Inference becomes a scarce resource, available only to the highest bidders or those with private nuclear reactors. Highly touted data center projects are put on hold because there isn’t enough power to run them, and rapidly depreciating GPUs are put in storage because there aren’t enough data centers to deploy them.
A robust strategy: Efficiency is your hedge. Stop treating compute as infinite. Invest in small language models (SLMs) and edge AI that run locally. If you can run 80% of your workload on a laptop-grade chip rather than an H100 in the cloud, you are at least partially insulated from the energy crunch.
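To make that hedge concrete, here’s a minimal sketch of what routing routine work to a locally hosted small model might look like. It assumes a local Ollama instance serving a small model; the endpoint, model name, and classification task are illustrative placeholders, not recommendations.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
LOCAL_MODEL = "llama3.2:3b"                          # hypothetical small-model choice

def classify_ticket(text: str) -> str:
    """Send a routine classification task to a locally hosted SLM
    instead of a frontier model in the cloud."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": LOCAL_MODEL,
            "prompt": (
                "Classify this support ticket as billing, bug, or other. "
                f"Answer with one word.\n\n{text}"
            ),
            "stream": False,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    print(classify_ticket("I was charged twice for my subscription last month."))
```

The point isn’t this particular stack; it’s that the large share of requests that don’t need frontier-scale intelligence shouldn’t pay frontier-scale prices or draw frontier-scale power.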
3. What if inference becomes a commodity?
The vector: Chinese labs continue to release open weight models with performance comparable to the previous generation of top-of-the-line US frontier models, but at a fraction of the training and inference cost. What’s more, they are training them with lower-cost chips. And it appears to be working.
The scenario: The price of “intelligence” collapses to near zero. The moat of having the biggest model and the best cutting-edge chips for training evaporates.
A robust strategy: Move up the stack. If the model is a commodity, the value is in the integration, the data, and the workflow. Build applications and services using the unique data, context, and workflows that no one else has.
4. What if Yann LeCun is right?
The vector: LeCun has long argued that auto-regressive LLMs are an “off-ramp” on the highway to AGI because they can’t reason or plan; they only predict the next token. He bets on world models (JEPA). OpenAI cofounder Ilya Sutskever has also argued that the AI industry needs fundamental research to solve basic problems like the ability to generalize.
The scenario: In 2026, LLMs hit a plateau. The market realizes we’ve spent billions on a dead-end technology, at least as a route to true AGI.
A robust strategy: Diversify your architecture. Don’t bet the farm on today’s AI. Focus on compound AI systems that use LLMs as just one component, while relying on deterministic code, databases, and small, specialized models for additional capabilities. Keep your eyes and your options open.
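As an illustration of what we mean by a compound system, here’s a hypothetical order-status assistant in which the database, not the model, is the source of truth, and the LLM is asked only to phrase an answer grounded in retrieved facts. The schema, function names, and the `llm` callable are placeholders for whatever your stack provides.

```python
import sqlite3

def lookup_order(order_id: str, db_path: str = "orders.db"):
    """Deterministic component: the database is the source of truth."""
    con = sqlite3.connect(db_path)
    row = con.execute(
        "SELECT order_id, status, eta FROM orders WHERE order_id = ?", (order_id,)
    ).fetchone()
    con.close()
    return dict(zip(("order_id", "status", "eta"), row)) if row else None

def draft_reply(order: dict, llm) -> str:
    """LLM component: used only to word the answer, grounded in retrieved facts."""
    prompt = (
        "Write a two-sentence customer update using ONLY these facts, "
        f"inventing nothing: {order}"
    )
    return llm(prompt)  # `llm` is any callable wrapping your model of choice

def answer_order_question(order_id: str, llm) -> str:
    order = lookup_order(order_id)
    if order is None:  # deterministic guardrail, no model involved
        return f"No order found with ID {order_id}."
    return draft_reply(order, llm)
```

If the LLM layer plateaus, or gets swapped for a different architecture entirely, the deterministic core keeps working.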
5. What if there is a major security incident?
The vector: We are currently hooking insecure LLMs up to banking APIs, email, and purchasing agents. Security researchers have been screaming about indirect prompt injection for years.
The scenario: A worm spreads through email auto-replies, tricking AI agents into transferring funds or approving fraudulent invoices at scale. Trust in agentic AI collapses.
A robust strategy: “Trust but verify” is dead; use “verify then trust.” Implement well-known security practices like least privilege (restrict your agents to the minimal list of resources they need) and zero trust (require authentication before every action). Stay on top of OWASP’s lists of AI vulnerabilities and mitigations. Keep a “human in the loop” for high-stakes actions. Advocate for and adopt standard AI disclosure and audit trails. If you can’t trace why your agent did something, you shouldn’t let it handle money.
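Here’s a deliberately simple sketch of what a least-privilege, human-in-the-loop gate around agent tool calls might look like. The tool names and the approval flow are hypothetical; a real deployment would tie this into existing identity, authorization, and logging infrastructure.

```python
import json
import logging
import time

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

ALLOWED_TOOLS = {"read_invoice", "draft_email"}            # least privilege: explicit allowlist
HIGH_STAKES_TOOLS = {"transfer_funds", "approve_invoice"}  # always require a human

def gate_tool_call(tool: str, args: dict, approver=input) -> bool:
    """Decide whether an agent-requested tool call may proceed, and log the decision."""
    if tool in HIGH_STAKES_TOOLS:
        answer = approver(f"Agent wants to call {tool}({args}). Approve? [y/N] ")
        allowed = answer.strip().lower() == "y"
    else:
        allowed = tool in ALLOWED_TOOLS
    logging.info(json.dumps({"ts": time.time(), "tool": tool, "args": args, "allowed": allowed}))
    return allowed
```

None of this is novel security engineering; the point is that the same disciplines apply to model-driven actors.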
6. What if China is actually ahead?
The vector: While the US focuses on raw scale and chip export bans, China is focusing on efficiency and embedded AI in manufacturing, EVs, and consumer hardware.
The scenario: We discover that 2026’s “iPhone moment” comes from Shenzhen, not Cupertino, because Chinese companies integrated AI into hardware better while we were fighting over chatbot and agentic AI dominance.
A robust strategy: Look globally. Don’t let geopolitical narratives blind you to technical innovation. If the best open source models or efficiency techniques are coming from China, study them. Open source has always been the best way to bridge geopolitical divides. Keep your stack compatible with the global ecosystem, not just the US silo.
7. What if robotics has its “ChatGPT moment”?
The vector: End-to-end learning for robots is advancing rapidly.
The scenario: Suddenly, automating physical labor becomes as feasible as automating digital work.
A robust strategy: If you are in a “bits” business, ask how you can bridge to “atoms.” Can your software control a machine? How might you embody useful intelligence into your products?
8. What if vibe coding is just the start?
The vector: Anthropic and Cursor are changing programming from writing syntax to managing logic and workflow. Vibe coding lets nonprogrammers build apps by just describing what they want.
The scenario: The barrier to entry for software creation drops to zero. We see a Cambrian explosion of apps built for a single meeting or a single family vacation. Alex Komoroske calls it disposable software: “Less like canned vegetables and more like a personal farmer’s market.”
A robust strategy: In a world where AI is good enough to generate whatever code we ask for, value shifts to knowing what to ask for. Coding is much like writing: Anyone can do it, but some people have more to say than others. Programming isn’t just about writing code; it’s about understanding problems, contexts, organizations, and even organizational politics to come up with a solution. Create systems and tools that embody unique knowledge and context that others can use to solve their own problems.
9. What if AI kills the aggregator business model?
The vector: Amazon and Google make money by being the tollbooth between you and the product or information you want. If people get their answers from AI, or an AI agent buys on their behalf, they bypass the ads and the sponsored listings, undermining the business model of internet incumbents.
The scenario: Search traffic (and ad revenue) plummets. Brands lose their ability to influence consumers via display ads. AI has destroyed the source of internet monetization and hasn’t yet figured out what will take its place.
A robust strategy: Own the customer relationship directly. If Google stops sending you traffic, you need an MCP server, an API, or a channel for direct brand loyalty that an AI agent respects. Make sure your information is accessible to bots, not just humans. Optimize for agent readability and reuse.
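As a sketch of what “accessible to bots” can mean in practice, here’s a minimal MCP server written with the Python MCP SDK’s FastMCP helper, exposing structured product data that an agent can query directly. The store name, tool, and catalog are hypothetical, and the SDK’s interface may evolve, so treat this as a shape rather than a recipe.

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("acme-catalog")  # hypothetical store name

@mcp.tool()
def product_info(sku: str) -> dict:
    """Return structured product data an agent can act on directly."""
    catalog = {
        "SKU-123": {"name": "Trail Jacket", "price_usd": 129.0, "in_stock": True},
    }
    return catalog.get(sku, {"error": "unknown sku"})

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to MCP-capable agents
```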
10. What if a political backlash arrives?
The vector: The divide between the AI rich and those who fear being replaced by AI is growing.
The scenario: A populist movement targets Big Tech and AI automation. We see taxes on compute, robot taxes, or strict liability laws for AI errors.
A robust strategy: Focus on value creation, not value capture. If your AI strategy is “fire 50% of the support staff,” you are not only making a shortsighted business decision; you are painting a target on your back. If your strategy is “supercharge our staff to do things we couldn’t do before,” you are building a defensible future. Align your success with the success of both your workers and customers.
In Conclusion
The future isn’t something that happens to us; it’s something we create. The most robust strategy of all is to stop asking “What will happen?” and start asking “What future do we want to build?”
As Alan Kay once said, “The best way to predict the future is to invent it.” Don’t wait for the AI future to happen to you. Do what you can to shape it. Build the future you want to live in.