This essay is part of MIT Technology Review’s 2023 Innovators Under 35 package. Meet this year’s honorees.

Innovation is a powerful engine for uplifting society and fueling economic growth. Antibiotics, electric lights, refrigerators, airplanes, smartphones—we have these things because innovators created something that didn’t exist before. MIT Technology Review’s Innovators Under 35 list celebrates individuals who have accomplished a lot early in their careers and are likely to accomplish much more still. 

Having spent many years working on AI research and building AI products, I’m fortunate to have participated in a few innovations that made an impact, like using reinforcement learning to fly helicopter drones at Stanford, starting and leading Google Brain to drive large-scale deep learning, and creating online courses that led to the founding of Coursera. I’d like to share some thoughts about how to do it well, sidestep some of the pitfalls, and avoid building things that lead to serious harm along the way.

AI is a dominant driver of innovation today

As I have said before, I believe AI is the new electricity. Electricity revolutionized all industries and changed our way of life, and AI is doing the same. It’s reaching into every industry and discipline, and it’s yielding advances that help multitudes of people.

AI—like electricity—is a general-purpose technology. Many innovations, such as a medical treatment, space rocket, or battery design, are fit for one purpose. In contrast, AI is useful for generating art, serving web pages that are relevant to a search query, optimizing shipping routes to save fuel, helping cars avoid collisions, and much more. 

The advance of AI creates opportunities for everyone in all corners of the economy to explore whether or how it applies to their area. Thus, learning about AI creates disproportionately many opportunities to do something that no one else has ever done before.

For instance, at AI Fund, a venture studio that I lead, I’ve been privileged to participate in projects that apply AI to maritime shipping, relationship coaching, talent management, education, and other areas. Because many AI technologies are new, their application to most domains has not yet been explored. In this way, knowing how to take advantage of AI gives you numerous opportunities to collaborate with others. 

Looking ahead, a few developments are especially exciting.

Prompting: While ChatGPT has popularized the ability to prompt an AI model to write, say, an email or a poem, software developers are just beginning to understand that prompting enables them to build in minutes the types of powerful AI applications that used to take months. A massive wave of AI applications will be built this way. 
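
To make this concrete, here is a minimal sketch of the kind of application feature prompting makes possible. It assumes the openai Python package (in its 2023-era API style) and an API key in the environment; the function name, model choice, and prompt are illustrative, not a prescribed recipe.

```python
# A toy app feature built with one prompt: label the sentiment of a review.
# Assumes: `pip install openai` (0.x API) and OPENAI_API_KEY set in the environment.
import openai

def label_sentiment(review_text: str) -> str:
    """Return "positive", "negative", or "neutral" for a customer review."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # any chat model works; this name is one example
        messages=[
            {"role": "system", "content": "You label customer reviews."},
            {
                "role": "user",
                "content": "Label the sentiment of this review as positive, "
                           "negative, or neutral. Reply with one word.\n\n" + review_text,
            },
        ],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip().lower()

print(label_sentiment("The battery died after two days. Very disappointed."))
```

A feature like this would once have required collecting labeled data and training a custom model; with prompting, a working first version takes minutes.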

Vision transformers: Text transformers—language models based on the transformer neural network architecture, which was invented in 2017 by Google Brain and collaborators—have revolutionized writing. Vision transformers, which adapt transformers to computer vision tasks such as recognizing objects in images, were introduced in 2020 and quickly gained widespread attention. The buzz around vision transformers in the technical community today reminds me of the buzz around text transformers a couple of years before ChatGPT. A similar revolution is coming to image processing. Visual prompting, in which the prompt is an image rather than a string of text, will be part of this change.
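
As a sketch of how accessible these models already are, the snippet below classifies an image with a pretrained vision transformer. It assumes the Hugging Face transformers, torch, and Pillow packages; the checkpoint name is a real public one but serves only as an example, and the image path is illustrative.

```python
# Classify an image with a pretrained vision transformer (ViT).
# Assumes: pip install transformers torch pillow
from PIL import Image
import torch
from transformers import ViTForImageClassification, ViTImageProcessor

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

image = Image.open("cat.jpg").convert("RGB")  # any local image; path is illustrative
inputs = processor(images=image, return_tensors="pt")  # resize + normalize into patches
with torch.no_grad():
    logits = model(**inputs).logits  # one score per ImageNet-1k class
print(model.config.id2label[logits.argmax(-1).item()])
```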

AI applications: The press has given a lot of attention to AI’s hardware and software infrastructure and developer tools. But this emerging AI infrastructure won’t succeed unless even more valuable AI businesses are built on top of it. So even though a lot of media attention is on the AI infrastructure layer, there will be even more growth in the AI application layer. 

These areas offer rich opportunities for innovators. Moreover, many of them are within reach of broadly tech-savvy people, not just people already in AI. Online courses, open-source software, software as a service, and online research papers give everyone tools to learn and start innovating. But even if these technologies aren’t yet within your grasp, many other paths to innovation are wide open.

Be optimistic, but dare to fail 

That said, a lot of ideas that initially seem promising turn out to be duds. Duds are unavoidable if you take innovation seriously. Here are some projects of mine that you probably haven’t heard of, because they were duds: 

I spent a long time trying to get aircraft to fly autonomously in formation to save fuel (similar to birds that fly in a V formation). In hindsight, I executed poorly and should have worked with much larger aircraft.

I tried to get a robot arm to unload dishwashers that held dishes of all different shapes and sizes. In hindsight, I was much too early. Deep-learning algorithms for perception and control weren’t good enough at the time.  

About 15 years ago, I thought that unsupervised learning (that is, enabling machine-learning models to learn from unlabeled data) was a promising approach. I mistimed this idea as well. It’s finally working, though, as the availability of data and computational power has grown.

It was painful when these projects didn’t succeed, but the lessons I learned turned out to be instrumental for other projects that fared better. Through my failed attempt at V-shape flying, I learned to plan projects much better and front-load risks. The effort to unload dishwashers failed, but it led my team to build the Robot Operating System (ROS), which became a popular open-source framework that’s now in robots from self-driving cars to mechanical dogs. Even though my initial focus on unsupervised learning was a poor choice, the steps we took turned out to be critical in scaling up deep learning at Google Brain.

Innovation has never been easy. When you do something new, there will be skeptics. In my younger days, I faced a lot of skepticism when starting most of the projects that ultimately proved to be successful. But this is not to say the skeptics are always wrong. I faced skepticism for most of the unsuccessful projects as well.

As I became more experienced, I found that more and more people would agree with whatever I said, and that was even more worrying. I had to actively seek out people who would challenge me and tell me the truth. Luckily, these days I am surrounded by people who will tell me when they think I’m doing something dumb! 

While skepticism is healthy and even necessary, society has a deep interest in the fruits of innovation. And that is a good reason to approach innovation with optimism. I’d rather side with the optimist who wants to give it a shot and might fail than the pessimist who doubts what’s possible. 

Take responsibility for your work

As we focus on AI as a driver of valuable innovation throughout society, social responsibility is more important than ever. People both inside and outside the field see a wide range of possible harms AI may cause. These include both short-term issues, such as bias and harmful applications of the technology, and long-term risks, such as concentration of power and potentially catastrophic applications. It’s important to have open and intellectually rigorous conversations about them. In that way, we can come to an agreement on what the real risks are and how to reduce them.

Over the past millennium, successive waves of innovation have reduced infant mortality, improved nutrition, boosted literacy, raised standards of living worldwide, and fostered civil rights including protections for women, minorities, and other marginalized groups. Yet innovations have also contributed to climate change, spurred rising inequality, polarized society, and increased loneliness. 

Clearly, the benefits of innovation come with risks, and we have not always managed them wisely. AI is the next wave, and we have an obligation to learn lessons from the past to maximize future benefits for everyone and minimize harm. This will require commitment from both individuals and society at large. 

At the social level, governments are moving to regulate AI. To some innovators, regulation may look like an unnecessary restraint on progress. I see it differently. Regulation helps us avoid mistakes and enables new benefits as we move into an uncertain future. I welcome regulation that calls for more transparency into the opaque workings of large tech companies; this will help us understand their impact and steer them toward achieving broader societal benefits. Moreover, new regulations are needed because many existing ones were written for a pre-AI world. The new regulations should specify the outcomes we want in important areas like health care and finance—and those we do not want. 

But avoiding harm shouldn’t be just a priority for society. It also needs to be a priority for each innovator. As technologists, we have a responsibility to understand the implications of our research and innovate in ways that are beneficial. Traditionally, many technologists adopted the attitude that the shape technology takes is inevitable and there’s nothing we can do about it, so we might as well innovate freely. But we know that’s not true. 

When innovators choose to work on differential privacy (which allows AI to learn from data without exposing personally identifying information), they make a powerful statement that privacy matters. That statement helps shape the social norms adopted by public and private institutions. Conversely, when innovators create Web3 cryptographic protocols to launder money, that too creates a powerful statement—in my view, a harmful one—that governments should not be able to trace how funds are transferred and spent. 
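
To give a concrete sense of what differential privacy means in practice, here is a minimal sketch of the Laplace mechanism, its textbook building block; the function, data, and epsilon value are hypothetical and purely illustrative.

```python
# The Laplace mechanism: answer a count query with noise calibrated so that
# adding or removing any one person barely changes the output distribution.
# Assumes: pip install numpy
import numpy as np

def private_count(records: list, epsilon: float) -> float:
    """Differentially private count of True entries.

    A counting query has sensitivity 1 (one person changes the true count
    by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy.
    """
    true_count = sum(bool(r) for r in records)
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data: did each user opt in to sharing analytics?
opt_ins = [True, False, True, True, False, True]
print(private_count(opt_ins, epsilon=0.5))  # noisy answer near the true value, 4
```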

If you see something unethical being done, I hope you’ll raise it with your colleagues and supervisors and engage them in constructive conversations. And if you are asked to work on something that you don’t think helps humanity, I hope you’ll actively work to put a stop to it. If you are unable to do so, then consider walking away. At AI Fund, I have killed projects that I assessed to be financially sound but ethically unsound. I urge you to do the same. 

Now, go forth and innovate! If you’re already in the innovation game, keep at it. There’s no telling what great accomplishment lies in your future. If your ideas are in the daydream stage, share them with others and get help to shape them into something practical and successful. Start executing, and find ways to use the power of innovation for good. 

Andrew Ng is a renowned global AI innovator. He leads AI Fund, DeepLearning.AI, and Landing AI.
