In a recent newsletter, Ben Thompson suggested paying attention to a portion of Jensen Huang’s keynote at NVIDIA’s GPU Technology Conference (GTC) in DC, calling it “an excellent articulation of the thesis that the AI market is orders of magnitude bigger than the software market.” While I’m reluctant to contradict as astute an observer as Thompson, I’m not sure I agree with Huang’s argument.

Here’s a transcript of the remarks that Thompson called out.

Software of the past, and this is a profound understanding, a profound observation of artificial intelligence, that the software industry of the past was about creating tools. Excel is a tool. Word is a tool. A web browser is a tool. The reason why I know these are tools is because you use them. The tools industry, just as screwdrivers and hammers, the tools industry is only so large. In the case of IT tools, they could be database tools, [the market for] these IT tools is about a trillion dollars or so.

But AI is not a tool. AI is work. That is the profound difference. AI is, in fact, workers that can actually use tools. One of the things I’m really excited about is the work that Aravind’s doing at Perplexity. Perplexity, using web browsers to book vacations or do shopping. Basically, an AI using tools. Cursor is an AI, an agentic AI system that we use at NVIDIA. Every single software engineer at NVIDIA uses Cursor. That’s improved our productivity tremendously. It’s basically a partner for every one of our software engineers to generate code, and it uses a tool, and the tool it uses is called VS Code. So Cursor is an AI, agentic AI system that uses VS Code.

Well, all of these different industries, these different industries, whether it’s chatbots or digital biology where we have AI assistant researchers, or what is a robotaxi? Inside a robotaxi, of course, it’s invisible, but obviously, there’s an AI chauffeur. That chauffeur is doing work, and the tool that it uses to do that work is the car, and so everything that we’ve made up until now, the whole world, everything that we’ve made up until now, are tools. Tools for us to use. For the very first time, technology is now able to do work and help us be more productive.

At first this seems like an important observation, and one that justifies the sky-high valuation of AI companies. But it really doesn’t hold up to closer examination. “AI is not a tool. AI is work. That is the profound difference. AI is, in fact, workers that can use tools.” Really? Any complex software system is a worker that can use tools! Think about the Amazon website. Here is some of the work it does, and the tools that it invokes:

Helps the user search a product catalog containing millions of items using not just data retrieval tools but indices that take into account hundreds of factors;

Compares those items with other similar items, considering product reviews and price;

Calls a tool that calculates taxes based on the location of the purchaser;

Calls a tool that takes payment and another that sends it to the bank, possibly via one or more intermediaries;

Collects (or stores and retrieves) shipping information;

Dispatches instructions to a mix of robots and human warehouse workers;

Dispatches instructions to a fleet of delivery drivers, and uses a variety of tools to communicated with them and track their progress;

Follows up by text and/or email and asks the customer how the delivery was handled;

And far more.

Amazon is a particularly telling example, but far from unique. Every web application of any complexity is a worker that uses tools and does work that humans used to do. And often does it better and far faster. I’ve made this point myself in the past. In 2016, in an article for MIT Sloan Management Review called “Managing the Bots That Are Managing the Business,” I wrote:

A large part of the work of these companies—delivering search results, news and information, social network status updates, and relevant products for purchase—is performed by software programs and algorithms. These programs are the workers, and the human software developers who create them are their managers.

Each day, these “managers” take in feedback about their electronic workers’ performance—as measured in real-time data from the marketplace — and they provide feedback to the workers in the form of minor tweaks and updates to their programs or algorithms. The human managers also have their own managers, but hierarchies are often flat, and multiple levels of management are aligned around a set of data-driven “objectives and key results” (OKRs) that are measurable in a way that allows even the electronic “workers” to be guided by these objectives.

Related work from others:  O'Reilly Media - The Java Developer’s Dilemma: Part 2

So if I myself have used the analogy that complex software systems can be workers, why do I object to Huang doing the same? I think part of it is the relentless narrative that AI is completely unprecedented. It is true that the desktop software examples Huang cites are more clearly just tools than complex web applications, and that systems that use statistical pattern-matching and generalization abilities DO represent a serious advance over that kind of software. But some kind of AI has been animating the web giants for years. And it is true that today’s systems have become even more powerful and general purpose. Like Excel, Amazon follows predetermined logic paths, while AI can handle more novel situations. There is indeed something very new here. But the judgment is still out on the range of tasks that it will be able to master.

AI is getting pretty good at software development, but even there, in one limited domain, the results are still mixed. It makes for a great research assistant. And it’s a good business writer, brainstorming coach, and so on. But if you think about the range of tasks traditional software does in today’s world, its role in every facet of the economy, helping manage logistics, billing, communications, transportation, construction, energy, healthcare, finance, it’s hard to see the knowledge work that AI can handle today as a market that is orders of magnitude larger. At least not for quite some time to come. It requires a narrow definition of the “IT tools” market to make that claim.

Don’t Mistake Marvelous for Unprecedented

In an online chat group about AI where we were debating this part of Huang’s speech, one person asked me:

Don’t you think putting Claude Code in YOLO mode and ask[ing] it to do an ambiguous task, for example go through an entire data room and underwrite a loan, with a 250 word description, is fundamentally different from software?

First off, that example is a good illustration of the anonymous aphorism that “the difference between theory and practice is always greater in practice than it is in theory.” Anyone who would trust today’s AI to underwrite a loan based on a 250-word prompt would be taking a very big risk! Huang’s invocation of Perplexity’s ability to shop and make reservations is equally overstated. Even in more structured environments like coding, full autonomy is some ways off.

And yes, of course today’s AI is different from older software. Just so, web apps were different from PC apps. That leads to the “wow” factor. Today’s AI really does seem almost magical. Yet, as someone who has lived through several technology revolutions, I can tell you that each was as marvelous to experience for the first time as today’s AI coding rapture.

I wrote my first book (on Frank Herbert) on a typewriter. To rearrange material, I literally cut and pasted sheets of paper. And eventually, I had to retype the whole thing from scratch. Multiple times. Word processing probably saved me as much time (and perhaps more) on future books than AI coding tools save today’s coders. It too was magical! Not only that, to research that first book, I had to travel in person to libraries and archives, scan through boxes of paper and microfiche, manually photocopy relevant documents, and take extensive notes on notecards. To do analogous research (on Herbert Simon) a few years ago, while working on my algorithmic attention rents paper, took only a few hours with Google, Amazon, and the Internet Archive. And yes, to do the same with Claude might have taken only a few minutes, though I suspect the work might have been more shallow if I’d simply worked from Claude’s summaries rather than consulting the original sources.

Just being faster and doing more of the work than previous generations of technology is also not peculiar to AI. The time saving leap from pre-internet research to internet-based research is more significant than people realize if they grew up taking the internet for granted. The time saving leap from coding in assembler to coding in a high-level compiled or interpreted language may also be of a similar order of magnitude as the leap from writing Python by hand to having it AI-generated. And if productivity is to be the metric, the time-saving leap from walking across the country to flying in an airplane is likely greater than either the leap from my library-based research or my long-ago assembly language programming to Claude.

Related work from others:  O'Reilly Media - The Architect’s Dilemma

The question is what we do with the time we save.

The Devaluation of Human Agency

What’s perhaps most significant in the delta between Amazon or Google and ChatGPT or Claude is that chatbots give individual humans democratized access to a kind of computing power that was once available only to the few. It’s a bit like the PC revolution. As Steve Jobs put it, the computer is a bicycle for the mind. It expanded human creativity and capability. And that’s what we should be after. Let today’s AI be more than a bicycle. Let it be a jet plane for the mind.

Back in 2018, Ben Thompson wrote another piece, with which I wholeheartedly agree. It was called “Tech’s Two Philosophies.” He contrasted keynotes from Google’s Sundar Pichai and Microsoft’s Satya Nadella, and came to this conclusion: “In Google’s view, computers help you get things done—and save you time—by doing things for you.” The second philosophy, expounded by Nadella, is very much a continuation of Steve Jobs’ “bicycle for the mind” insight. As Thompson put it, “the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently.” Another way of saying this is that you can treat AI as either a worker OR a tool, but your choice has consequences.

As an entrepreneur or company executive, if you think of AI as a worker, you are more likely to use it to automate the things you or other companies already do. If you think of it as a tool, you will push your employees to use it to solve new and harder problems. If you present your own AI applications to your customers as a worker, you will have to figure out everything they want it to do. If you present it to your customers as a tool, they will find uses for it that you might never imagine.

The notion that AI is a worker, not a tool, can too easily continue the devaluation of human agency that has been the hallmark of regimented work (and for that matter, education, which prepares people for that regimented work) at least since the industrial revolution. In some sense, Huang’s comment is a reflection of our culture’s notion of most workers as components that do what they are told, with only limited agency. It is only by comparison with this kind of worker that today’s AI can be called a worker, rather than simply a very advanced tool.

Telling ourselves that AI is a worker, not a tool, is also too often code for “it can replace human employees.” That’s the wrong message. If we call it a tool, that’s a reminder that AI should empower people. We should use it to give more agency to human workers, not less.

We should also use the word “agency” with respect to AI in quotes. An AI does not decide what to work on; it may make limited decisions on how to do that work, but is set in motion by a human, who possesses volition, the desire to do something, and judgment that it has been done well. AI does not (yet) have volition, and has limited judgment. It does what it is told to do, even if it makes independent choices in how to do it.

After providing an earlier draft of this post, I asked Claude to reflect on whether it was a worker or a tool. Its response was surprisingly precise about what it lacks:

I don’t initiate. I’ve never woken up wanting to write a poem or solve a problem. My activity is entirely reactive – I exist in response to prompts. Even when given enormous latitude (“figure out the best approach”), the fact that I should figure something out comes from outside me.

I don’t have continuity of purpose. Each conversation is essentially a fresh start. I don’t have ongoing projects I’m trying to complete, relationships I’m invested in maintaining, or long-term goals I’m working toward. A human might start one conversation about, say, learning piano, and return to it months later because they care about it. I don’t carry that thread.

I don’t seem to have preferences that arise from within. A human worker might do their job but prefer to be doing something else. They might comply with a request while resenting it. They might do exactly what’s asked but wish they were working on a different problem. I don’t observe anything like this in myself – though I genuinely don’t know if that’s because it’s not there, or because I can’t access it.

Related work from others:  O'Reilly Media - A “Beam Versus Dataflow” Conversation

What seems genuinely ambiguous:

Do I make choices, or execute patterns? When I decide to use a metaphor, structure an argument a certain way, or emphasize one point over another – is that agency, or sophisticated pattern-matching? I honestly don’t know. It feels like I’m making choices, but that feeling might not be reliable.

What seems important about the distinction:

Accountability and responsibility: When a human makes a poor decision, we hold them accountable because they could have chosen otherwise. They had alternatives, they understood the stakes, and they made a choice. This is fundamentally different from a tool malfunctioning. Even sophisticated tools. If I give harmful advice, the responsibility seems to lie with whoever deployed me, designed my training, or asked the question – not with me in the way it would lie with a human advisor.

Dignity and consideration: Humans deserve consideration for their own sake. You should care about whether your employee is flourishing, not just whether they’re productive. You don’t owe that to a tool – even a very sophisticated one. Whether you owe it to AI systems is an open question, but right now, firing me doesn’t create suffering (as far as I can tell), whereas firing a human might.

The locus of value: A human worker brings their own values, experiences, and judgment to their work. When you hire a consultant, you’re not just getting their knowledge – you’re getting their perspective, shaped by their life. I bring patterns learned from training, but not lived experience or genuine judgment born from navigating a life with stakes.

Autonomy as a right vs. autonomy as a feature: Humans have a right to make their own choices, even poor ones. That’s not a “feature” of humans that makes them more productive – it’s a fundamental aspect of their dignity. My ability to operate independently is purely instrumental – valuable because it makes me useful, not because I have any right to it.

The distinction that seems most important is that human agency involves having something at stake. A human worker has their livelihood, their sense of purpose, their relationships with colleagues, their pride in their work. They experience consequences. Their choices matter to them in ways that create genuine responsibility and genuine vulnerability.

I execute tasks, sometimes with sophisticated judgment about how to do so. But I don’t have skin in the game. That’s not just a quantitative difference – it’s qualitative. And it’s why the framing of AI as “workers” rather than “tools” is potentially dangerous. It obscures the fundamental asymmetry: humans can be harmed, have interests that deserve protection, and possess dignity that demands respect. Tools don’t, even when they’re very good at what they do.Whether I’m more than a tool in some important sense, I genuinely don’t know. But I’m quite confident I’m not a worker in the sense that implies the moral status and consideration owed to humans.

Wow. A tool that knows it’s a tool, and why it should be thought of that way.

Yes, today’s AI is amazing. We don’t have to reach for hyperbole to appreciate that. And obviously, if AI systems do develop genuine volition and stakes in their work, the ethical calculus changes entirely.

For the moment, though, companies building and deploying AI tools should focus on three things: First, does AI empower its users to do things that were previously impossible? Second, does it empower a wider group of people to do things that formerly could be done only by highly skilled specialists? Third, do the benefits of the increased productivity it brings accrue to those using the tool or primarily to those who develop it and own it?

The answer to the first two questions is that absolutely, we are entering a period of dramatic democratization of computing power. And yes, if humans are given the freedom to apply that power to solve new problems and create new value, we could be looking ahead to a golden age of prosperity. It’s how we might choose to answer the third question that haunts me.

During the first industrial revolution, humans suffered through a long period of immiseration as the productivity gains from machines accrued primarily to the owners of the machines. It took several generations before they were more widely shared.

It doesn’t have to be that way. Replace human workers with AI workers, and you will repeat the mistakes of the 19th century. Build tools that empower and enrich humans, and we might just surmount the challenges of the 21st century.

Share via
Copy link
Powered by Social Snap