By John P. Desmond, AI Trends Editor  

Advancing trustworthy AI and machine learning to mitigate agency risk is a priority for the US Department of Energy (DOE), and identifying best practices for implementing AI at scale is a priority for the US General Services Administration (GSA).  

That’s what attendees learned in two sessions at the AI World Government live and virtual event held in Alexandria, Va., last week.

Pamela Isom, Director of the AI and Technology Office, DOE

Pamela Isom, Director of the AI and Technology Office at the DOE, who spoke on Advancing Trustworthy AI and ML Techniques for Mitigating Agency Risks, has been involved in proliferating the use of AI across the agency for several years. With an emphasis on applied AI and data science, she oversees risk mitigation policies and standards and has been involved with applying AI to save lives, fight fraud, and strengthen the cybersecurity infrastructure.  

She emphasized the need for the AI project effort to be part of a strategic portfolio. “My office is there to drive a holistic view on AI and to mitigate risk by bringing us together to address challenges,” she said. The effort is assisted by the DOE’s AI and Technology Office, which is focused on transforming the DOE into a world-leading AI enterprise by accelerating research, development, delivery and the adoption of AI.  

“I am telling my organization to be mindful of the fact that you can have tons and tons of data, but it might not be representative,” she said. Her team looks at examples from international partners, industry, academia and other agencies for outcomes “we can trust” from systems incorporating AI.  
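Isom’s point about representative data can be made concrete with a quick statistical check. The sketch below is a hypothetical illustration, not a DOE tool: it compares the category mix of a training sample against known population shares using a chi-squared goodness-of-fit test, with all category names and figures invented for the example.

```python
# Hypothetical sketch: test whether a training sample's category mix
# matches a known reference population (all numbers invented).
from scipy.stats import chisquare

# Reference population shares, e.g. from census or enterprise records.
population_shares = {"region_a": 0.40, "region_b": 0.35, "region_c": 0.25}

# Counts of the same categories observed in the training data.
sample_counts = {"region_a": 6200, "region_b": 2900, "region_c": 900}

total = sum(sample_counts.values())
observed = [sample_counts[k] for k in population_shares]
expected = [population_shares[k] * total for k in population_shares]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.05:
    print(f"Sample mix differs from population (p={p_value:.3g}); "
          "tons of data, but not representative.")
```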

“We know that AI is disruptive, in trying to do what humans do and do it better,” she said. “It is beyond human capability; it goes beyond data in spreadsheets; it can tell me what I’m going to do next before I contemplate it myself. It’s that powerful.”

As a result, close attention must be paid to data sources. “AI is vital to the economy and our national security. We need precision; we need algorithms we can trust; we need accuracy. We don’t need biases,” Isom said, adding, “And don’t forget that you need to monitor the output of the models long after they have been deployed.”   
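Her advice to keep monitoring models long after deployment can likewise be sketched in a few lines. The example below is a minimal illustration under assumed names and thresholds, not an agency system: it scores rolling batches of labeled production predictions and flags any batch whose accuracy falls below the committed target.

```python
# Minimal post-deployment monitoring sketch (names and thresholds assumed):
# score each batch of labeled production traffic and alert on degradation.
from typing import Sequence

TARGET_ACCURACY = 0.90  # the level the project committed to

def batch_accuracy(predictions: Sequence[int], labels: Sequence[int]) -> float:
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def monitor(batches):
    """Yield an alert for every batch whose accuracy misses the target."""
    for i, (preds, labels) in enumerate(batches):
        acc = batch_accuracy(preds, labels)
        if acc < TARGET_ACCURACY:
            yield f"batch {i}: accuracy {acc:.2%} below target {TARGET_ACCURACY:.0%}"

# Example: the second batch has drifted well below target.
batches = [
    ([1, 0, 1, 1, 0, 1, 1, 0, 1, 1], [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]),
    ([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]),
]
for alert in monitor(batches):
    print(alert)
```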

Executive Orders Guide DOE AI Work

Executive Order 14028, a detailed set of actions to address the cybersecurity of government agencies, issued in May 2021, and Executive Order 13960, promoting the use of trustworthy AI in the federal government, issued in December 2020, provide valuable guides to her work.

To help manage the risk of AI development and deployment, Isom has produced the AI Risk Management Playbook, which provides guidance on system features and mitigation techniques. It also includes a filter for ethical and trustworthy-AI principles, which are considered throughout the AI lifecycle stages and across risk types. The playbook also ties to relevant Executive Orders.

The playbook also provides examples, such as a project whose results come in at 80% accuracy when the target was 90%. “Something is wrong there,” Isom said, adding, “The playbook helps you look at these types of problems and what you can do to mitigate risk, and what factors you should weigh as you design and build your project.”

While the playbook is internal to the DOE at present, the agency is looking into next steps for an external version. “We will share it with other federal agencies soon,” she said.

GSA Best Practices for Scaling AI Projects Outlined  

Anil Chaudhry, Director of Federal AI Implementations, AI Center of Excellence (CoE), GSA


Anil Chaudhry, Director of Federal AI Implementations for the AI Center of Excellence (CoE) of the GSA, who spoke on Best Practices for Implementing AI at Scale, has over 20 years of experience in technology delivery, operations and program management in the defense, intelligence and national security sectors.   

The mission of the CoE is to accelerate technology modernization across the government, improve the public experience and increase operational efficiency. “Our business model is to partner with industry subject matter experts to solve problems,” Chaudhry said, adding, “We are not in the business of recreating industry solutions and duplicating them.”   

The CoE is providing recommendations to partner agencies and working with them to implement AI systems as the federal government engages heavily in AI development. “For AI, the government landscape is vast. Every federal agency has some sort of AI project going on right now,” he said, and the maturity of AI experience varies widely across agencies.  

Typical use cases he is seeing have AI focused on increasing speed and efficiency, on cost savings and cost avoidance, on improved response time, and on increased quality and compliance. As one best practice, he recommended that agencies vet their industry partners’ commercial experience with the large datasets they will encounter in government.

“We’re talking petabytes and exabytes here, of structured and unstructured data,” Chaudhry said. [Ed. Note: A petabyte is 1,000 terabytes.] “Also ask industry partners about their strategies and processes on how they do macro and micro trend analysis, what their experience has been in the deployment of bots such as in Robotic Process Automation, and how they demonstrate sustainability in the face of data drift.”
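The data-drift question Chaudhry raises is commonly answered with a drift metric. The sketch below illustrates one common choice, the Population Stability Index (PSI); the article does not prescribe a specific metric, and all distributions and figures here are invented.

```python
# Illustrative data-drift check using the Population Stability Index (PSI).
# All numbers are invented; the article does not prescribe this metric.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time sample and a production sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) when a bin is empty in either sample.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)  # feature at training time
live = rng.normal(0.4, 1.2, 50_000)   # the same feature in production

print(f"PSI = {psi(train, live):.3f}")
```

A rule of thumb often applied to PSI is that values above roughly 0.2 signal material drift worth investigating.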

He also asks potential industry partners to describe the AI talent on their team or what talent they can access. If the company is weak on AI talent, Chaudhry would ask, “If you buy something, how will you know you got what you wanted when you have no way of evaluating it?”  

He added, “A best practice in implementing AI is defining how you train your workforce to leverage AI tools, techniques and practices, and to define how you grow and mature your workforce. Access to talent leads to either success or failure in AI projects, especially when it comes to scaling a pilot up to a fully deployed system.”  

In another best practice, Chaudhry recommended examining the industry partner’s access to financial capital. “AI is a field where the flow of capital is highly volatile. You cannot predict or project that you will spend X amount of dollars this year to get where you want to be,” he said, because an AI development team may need to explore another hypothesis, or clean up data that is not transparent or is potentially biased. “If you don’t have access to funding, it is a risk your project will fail,” he said.

Another best practice is access to logistical capital, such as the data that sensors collect for an AI IoT system. “AI requires an enormous amount of data that is authoritative and timely. Direct access to that data is critical,” Chaudhry said. He recommended that data-sharing agreements be in place with organizations relevant to the AI system. “You might not need the data right away, but having access to it, and having thought through the privacy issues before you need it, is a good practice for scaling AI programs,” he said.

A final best practice is planning for physical infrastructure, such as data center space. “When you are in a pilot, you need to know how much capacity you need to reserve at your data center, and how many endpoints you need to manage” when the application scales up, Chaudhry said, adding, “This all ties back to access to capital and all the other best practices.”

Learn more at AI World Government. 
