In San Francisco last week, everyone’s favorite surprise visitor was Microsoft CEO Satya Nadella. 

At OpenAI’s DevDay—the company’s first-ever event for developers building on its platform—Nadella bounded on stage to join OpenAI CEO Sam Altman, blowing the hair back on an already electrified audience. “You guys have built something magic,” he gushed. 

Two days later on another stage, in another venue, at another developers’ conference, Nadella made his second unannounced appearance of the week—this time at GitHub Universe. There Thomas Dohmke, GitHub’s CEO, was showing off a new version of the company’s AI programming tool, Copilot, that can generate computer code from natural language. Nadella was effusive: “I can code again!” he exclaimed. 

Today, Nadella will be onstage speaking to developers at Microsoft Ignite, where the company is announcing even more AI-based developer tools, including an Azure AI Studio that will let devs choose between model catalogs from not only Microsoft, but also the likes of Meta, OpenAI, and Hugging Face, as well as new tools for customizing Copilot for Microsoft 365. 

If it seems like Nadella is obsessed with developers, you’re not wrong. He’s making the rounds to tout all the ways they can use a new generation of AI-powered tools, like GitHub Copilot (Microsoft acquired GitHub in 2018) or the new suite of developer tools from OpenAI, a company in which Microsoft has reportedly invested some $13 billion.

Last week, Nadella took a 20-minute break from all of his onstage appearances to sit down with MIT Technology Review to talk about (you guessed it) developers. He repeatedly emphasized Microsoft’s longstanding focus on developers. But he also had a message: The way we create software is fundamentally changing. 

Nadella believes a platform shift is underway, one that will prove just as significant as the shifts from mainframe to desktop or desktop to mobile. This time, the transition is to natural language AI tools, some of which he argues will lower the barrier to entry for software development, make existing developers more productive, and ultimately lead to a new era of creativity. 

We present Nadella in his own words, below. His remarks have been edited and condensed somewhat for readability.  



One criticism of OpenAI is that its very business is only possible via Microsoft, which has given the startup billions of dollars and access to the resources it needs to power its computing-intensive language model. Yet Microsoft is also highly dependent on OpenAI’s technology to power services like GitHub Copilot, Bing, and Office 365. Altman even joked about the partnership onstage. We asked Nadella about this relationship.   

I’ve always felt that Microsoft is a platform-and-partner-first company, and this is not new to us. And so therefore, we both are effectively codependent, right? They depend on us to build the best systems, we depend on them to build the best models, and we go to market together. 


Nadella says this platform shift is different enough from previous ones that he feels the company needs to provide developers not only with tools, but also with a clear message about what it’s thinking and how devs can come along. 

Whenever you have a platform shift, the key thing is to make sure the platform is ubiquitously available for developers to build all kinds of new things. So to us, the most important task is to make the developer tools, the developer platforms, broadly available. 

The second thing is for us to also show the light, right? Whether it’s OpenAI building ChatGPT and then innovating on top of it, or us building Copilot and innovating on it. That will give developers an opportunity to distribute their applications. So the most important thing in any platform creation is to get the platform ubiquitously available, and then help developers reach [their] audience. 

Those are the two goals that we have across all of these [conferences].


Productivity gains in the United States have been sluggish for the past 15 or more years. The last huge platform shift—the rise of mobile development—did little to achieve widespread prosperity. Nadella says this time will be different, largely because the shift to AI will fuel a creative revolution by making it easy for anyone to generate new work, including code. 

On the other hand, coding today is a highly skilled, well-paid job, and there’s some concern that AI could effectively automate it. Nadella argues that skilled programmers will remain in demand, but that their jobs will change and even more jobs will become available. He has said he envisions 1 billion developers creating on Microsoft’s platforms, many of them with little to no previous coding experience.


Anytime you have something as disruptive as this, you have to think about the displacement it causes. And that means it’s all about upskilling and reskilling, and in an interesting way, it’s more akin to what happened when word processors and spreadsheets started showing up. Obviously, if you were a typist, things really drastically changed. But at the same time, it enabled a billion people to be able to type into word processors and create and share documents.

I don’t think professional developers are going to be any less valuable than they are today. It’s just that we’re going to have many, many gradations of developers. Each time you’re prompting a Bing chat or ChatGPT, you’re essentially programming. The conversation itself is steering a model.

I think there will be many, many new jobs, there will be many, many new types of knowledge work, or frontline work, where the drudgery is removed.

I think the mobile era was fantastic. It made consumption of services ubiquitous. It didn’t translate into ubiquitous creation of services.

The last time there was a broad spread of productivity in the United States and beyond because of information technology was the [advent of the] PC. In fact, even the critics of information technology and productivity, like Robert Gordon of Northwestern, acknowledged that the PC, when it first showed up at work, did actually translate to broad productivity stats changes.

So that’s where I think this is, where these tools, like Copilot, being used by a [beginner] software engineer in Detroit, in order to be able to write [code].… I think we’ll have a real change in the productivity of the auto industry. Same thing in retail, same thing in frontline work and knowledge work.

The barrier to entry is very low. Because it’s natural language, domain experts can build apps or workflows. That, I think, is what’s the most exciting thing about this. This is not about just a consumption-led thing. This is not about elite creation. This is about democratized creation. I’m very, very hopeful that we’ll start seeing the productivity gains much more broadly.



Numerous intellectual property cases and class action lawsuits are before the US courts over issues of fair use. At least one singles out GitHub Copilot specifically, claiming Microsoft and OpenAI’s generative tools, which are trained on open source code, amount to software piracy. There’s a fear that people who use these tools could be subject to intellectual property claims themselves. Microsoft is trying to address these issues with a broad indemnification policy. OpenAI also announced its own indemnification policy, Copyright Shield, at its DevDay conference. 

Fundamentally these large models crawl and get content and then train on that content, right? If anybody doesn’t want their content to be crawled, we have great granular controls in our crawlers that allow anybody to stop it from crawling. In fact, we have controls where you can have it crawl just for search, but not for large language model training. That’s available today. So anybody who really wants to ensure that their content is not being taken for retraining can do so today. 
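The opt-outs Nadella describes are typically expressed through a site’s robots.txt file, where publishers can allow or block individual crawler user-agents. As an illustrative sketch only: the user-agent tokens below follow the publicly documented convention (GPTBot is OpenAI’s stated training crawler; Bingbot is Microsoft’s search crawler), but the exact tokens and their training-vs.-search semantics vary by vendor and should be checked against each crawler’s own documentation.

```
# robots.txt — illustrative sketch of per-crawler controls
# Block OpenAI's documented LLM-training crawler entirely
User-agent: GPTBot
Disallow: /

# Still allow Microsoft's search crawler to index the site
User-agent: Bingbot
Allow: /
```

Because robots.txt rules are matched per user-agent, a publisher can in this way stay visible in search results while opting out of model-training crawls, which is the distinction Nadella points to.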

The second thing, of course, is I think the courts and the legislative process in some combination will have to decide what is fair use and what is not fair use.

We have taken a lot of control in making sure that we are only training models, and we are using data to train models that we’re allowed to and which we believe we have a legal standing on. 

If it comes to it, we’ll litigate it in the courts. We’ll take that burden on so the users of our products don’t have to worry about it. That’s as simple as that, which is to take the liability and transfer it from our users to us. And of course, we are going to be very, very mindful of making sure we’re on the right side of the law there.
