The first director of the White House’s National Artificial Intelligence Initiative Office, Lynne Parker, has just stepped down. The NAIIO launched in January 2021 to coordinate the different federal agencies that work on artificial-intelligence initiatives, with the goal of advancing US development of AI. 

Its goals are to ensure that the US is a leader in AI research and development, particularly in the development and use of trustworthy AI, and to prepare the US workforce with better education and training. 

As its first director, Parker oversaw the creation of a national AI R&D strategic plan, a national AI research institute, and an AI portal to help researchers apply for funding, and conducted research into ways to measure and evaluate AI.

She is now returning to her role in academia as the director of AI initiatives at the University of Tennessee, Knoxville.

We spoke to her about the office’s accomplishments and the major challenges ahead for AI in the US. The conversation has been condensed and lightly edited for clarity.

What has been the NAIIO’s biggest accomplishment so far?

The National AI Initiative covers so much territory: R&D, governance aspects of the use of AI, education and workforce training, international collaboration, and the use of AI within the federal government. That’s a lot of activities.

The NAIIO has helped to structure all of that work, putting in place a number of communication channels and ways to prioritize and coordinate what we're doing across those areas, so that we can make efficient and effective progress.

What’s the most important challenge it needs to tackle in the future?

In the R&D space, I think the challenge will be to make sure that we’re continuing to invest in high-quality, long-term research that has impactful outcomes, so that we can build up the next generation of AI that will give us benefit down the road.


For the development and use of trustworthy AI, the challenge is how we actually implement many of the fundamental principles.

For education and the workforce: AI, in some sense, is becoming the new math. But not everyone needs advanced calculus, for instance; many just need to know algebra. It's the same in the AI space. Many people need to understand the basic concepts and capabilities of AI at just a conceptual level, while others need to be experts who can program and develop new machine-learning algorithms. Coming up with education and training opportunities for different people from all walks of life and in all types of jobs is a challenge.

Which aspects of the NAIIO’s remit have been easier to make progress on? Which have been harder?

Part of this may reflect my own background, but I think R&D has been easier because it’s more structured … At the end of the day, funding is often what it boils down to in R&D, and I think we have done a very good job of prioritizing and funding AI R&D. 

In terms of a pillar that's more challenging, I'll come back to education and the workforce, because there are so many different kinds of needs. And because K-12 education is managed by the states rather than through a single approach for the entire country, there's a long-standing challenge: How do we build up that capacity? How do we create curricula that people across the country can use?

The lack of sufficient talent in the AI sphere, or simply of a sufficient understanding of what AI is among all of our people, is a long-standing challenge. We've recognized that for many years as it relates to STEM areas in general. But we do have a bit of a cultural challenge, in terms of people thinking that the field is hard, or that it's geeky, or something like that. And so not as many people will enter the field.


We don’t currently have enough people to teach these fields. Many experts are leaving academia and going to industry. And it’s great that we have a thriving industry in this country in this space, but when we don’t have enough educators that can train the next generation, then that exacerbates the problem. So this is a very tough pillar in my mind, but it’s one that we really have to prioritize and continue to make progress in.

The EU is working on legislation to regulate AI. Should the US adopt any of the same measures? 

One area of clear commonality is understanding AI's implications and the need for regulation through the lens of risk. Taking a sector-based approach to evaluating risk is something that we agree on at a high level. The National Institute of Standards and Technology (NIST) is making important contributions in this space with its development of the AI Risk Management Framework.

They’re making good progress on this and anticipate having that framework out by the beginning of 2023. There are some nuances here: different people interpret risk differently, so it’s important to come to a common understanding of what risk is, what the potential harms might be, and what appropriate approaches to risk mitigation might look like.

You’ve talked about the issue of bias in AI. Are there ways that the government can use regulation to help solve that problem? 

There are both regulatory and nonregulatory ways to help. There are a lot of existing laws that already prohibit the use of any kind of system that’s discriminatory, and that would include AI. A good approach is to see how existing law already applies, and then clarify it specifically for AI and determine where the gaps are. 

NIST came out with a report earlier this year on bias in AI. They mentioned a number of approaches that should be considered as it relates to governing in these areas, but a lot of it has to do with best practices. So it’s things like making sure that we’re constantly monitoring the systems, or that we provide opportunities for recourse if people believe that they’ve been harmed. 


It’s making sure that we’re documenting the ways that these systems are trained, and on what data, so that we can make sure that we understand where bias could be creeping in. It’s also about accountability, and making sure that the developers and the users, the implementers of these systems, are accountable when these systems are not developed or used appropriately.
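As a loose illustration only, here is a minimal sketch of what documenting how a system was trained and monitoring its decisions for disparate outcomes could look like in code. Every name, field, and threshold here is a hypothetical assumption for the example; none of it reflects NIST guidance or any NAIIO requirement.

```python
# Hypothetical sketch of training documentation plus a disparity check.
# All identifiers and the 0.2 threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TrainingRecord:
    """Minimal record of how a model was trained and on what data."""
    model_name: str
    dataset_description: str
    training_date: str


def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-decision rates across groups.

    `outcomes` maps a group label to a list of 0/1 decisions.
    """
    rates = [sum(v) / len(v) for v in outcomes.values() if v]
    return max(rates) - min(rates)


if __name__ == "__main__":
    record = TrainingRecord(
        model_name="loan-screener-v2",  # hypothetical system
        dataset_description="2018-2021 loan applications, anonymized",
        training_date="2022-06-01",
    )
    decisions = {"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]}
    gap = demographic_parity_gap(decisions)
    print(f"{record.model_name}: demographic parity gap = {gap:.2f}")
    if gap > 0.2:  # illustrative threshold, not a regulatory standard
        print("Flag for human review and open a recourse process.")
```

A sketch like this pairs the documentation Parker describes (what was trained, on what data, and when) with ongoing monitoring and a recourse trigger, the best practices named above.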

What do you think is the right balance between public and private development of AI? 

The private sector is investing significantly more than the federal government into AI R&D. But the nature of that investment is quite different. The investment happening in the private sector goes very much into products or services, whereas the federal government invests in long-term, cutting-edge research that doesn’t necessarily have a market driver but does potentially open the door to brand-new ways of doing AI. So on the R&D side, it’s very important for the federal government to invest in those areas where industry has no market-driven reason to invest.

Industry can partner with the federal government to help identify what some of those real-world challenges are. That would be fruitful for US federal investment. 

There is so much that the government and industry can learn from each other. The government can learn about best practices or lessons learned that industry has developed for their own companies, and the government can focus on the appropriate guardrails that are needed for AI.
