Building fair and transparent systems with artificial intelligence has become an imperative for enterprises. AI can help enterprises create personalized customer experiences, streamline back-office operations from onboarding documents to internal training, prevent fraud, and automate compliance processes. But deploying intricate AI ecosystems with integrity requires good governance standards and metrics.

To deploy and manage the AI lifecycle—encompassing advanced technologies like machine learning (ML), natural language processing, robotics, and cognitive computing—both responsibly and efficiently, firms like JPMorgan Chase employ best practices known as ModelOps.

These best governance practices involve “establishing the right policies and procedures and controls for the development, testing, deployment and ongoing monitoring of AI models so that it ensures the models are developed in compliance with regulatory and ethical standards,” says JPMorgan Chase managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance, Stephanie Zhang.

Because AI models are driven by data and environment changes, says Zhang, continuous compliance is necessary to ensure that AI deployments meet regulatory requirements and establish clear ownership and accountability. Amidst these vigilant governance efforts to safeguard AI and ML, enterprises can encourage innovation by creating well-defined metrics to monitor AI models, employing widespread education, encouraging all stakeholders’ involvement in AI/ML development, and building integrated systems.

“The key is to establish a culture of responsibility and accountability so that everyone involved in the process understands the importance of this responsible behavior in producing AI solutions and be held accountable for their actions,” says Zhang.

This episode of Business Lab is produced in association with JPMorgan Chase.

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is building and deploying artificial intelligence with a focus on ModelOps, governance, and building transparent and fair systems. As AI becomes more complex, but also more integrated into our daily lives, balancing governance and innovation is a priority for enterprises.

Two words for you: good governance.

Today we are talking with Stephanie Zhang, managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance at JPMorgan Chase.

This podcast is produced in association with JPMorgan Chase.

Welcome Stephanie.

Stephanie Zhang: Thank you for having me, Laurel.

Laurel: Glad to have you here. So, often people think of artificial intelligence as individual technologies or innovations, but could you describe the ecosystem of AI and how it can actually help different parts of the business?

Stephanie: Sure. I’ll start by explaining what AI is. Artificial intelligence is the ability for a computer to think and learn. With AI, computers can do things that traditionally require human intelligence. AI can process large amounts of data in ways that humans cannot. The goal for AI is to be able to do things like recognize patterns, make decisions, and exercise judgment the way humans do. And AI is not just a single technology or innovation, but rather an ecosystem of different technologies, tools, and techniques that all work together to enable intelligent systems and applications. The AI ecosystem includes technologies such as machine learning, natural language processing, computer vision, robotics, and cognitive computing, among others. And finally, software: the business software that makes decisions based on the predictions coming out of the models.

Laurel: That’s a really great way to set the context for using AI in the enterprise. So how does artificial intelligence help JPMorgan Chase build better products and services?

Stephanie: At JPMorgan Chase, our purpose is to make dreams possible for everyone, everywhere, every day. We aim to be the most respected financial services firm in the world, serving corporations and individuals with exceptional client service, operational excellence, and a commitment to integrity, fairness, and responsibility, and to be a great place to work with a winning culture. AI can contribute to all of the things I’ve mentioned in answering your previous questions. Specifically, AI is involved in making better products and services from the back office to the front, customer-facing applications. Here are some examples. First, as I mentioned earlier, improved customer experience: we use AI to personalize the customer experience.

Second is streamlined operations. Behind the scenes, a lot of AI applications streamline our operations, ranging from client onboarding documents to training our AI-assisted agents to helping with internal training. Third, fraud detection and prevention. As a financial services company, this is a big area of focus for us; AI helps in terms of cybersecurity and in terms of credit card fraud detection and prevention, much of which is done by analyzing large amounts of data to detect anomalous situations. And then, last but not least, trading and investment. AI helps our investment managers by bringing them more information in an efficient manner and by recommending information and things to look at. Compliance as well: AI-powered tools can help financial services firms such as ours comply with regulatory requirements by automating compliance processes.

Laurel: That’s a great explanation, Stephanie. So more specifically, what is ModelOps and how is it used with AI and then to help the firm innovate?

Stephanie: ModelOps is a set of best practices and tools used to manage the overall lifecycle of AI and machine learning models in the production environment. Specifically, it focuses on the governance side of end-to-end lifecycle management: from the very beginning, when you decide to approach an AI/ML project and define its intention and the outcome you want, to how you process the data, to model development, to how you deploy the model, and on to ongoing monitoring to see whether the model’s performance is still as intended. It’s a structured approach to managing the entire lifecycle of AI models.
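To make that structure concrete, the lifecycle Zhang describes can be pictured as a series of gated stages, each requiring sign-off before the next begins. The following is a minimal sketch under that assumption; the stage names and the sign-off rule are illustrative, not a description of JPMorgan Chase’s actual tooling.

```python
from enum import Enum, auto

class Stage(Enum):
    """High-level lifecycle stages, as described in the conversation."""
    INTENT = auto()        # business problem and desired outcome defined
    DATA = auto()          # data acquired and processed
    DEVELOPMENT = auto()   # model built and evaluated
    REVIEW = auto()        # independent review before release
    DEPLOYMENT = auto()    # model serving in production
    MONITORING = auto()    # ongoing performance and compliance checks

# Each stage may only begin once the previous one is signed off.
NEXT_STAGE = {
    Stage.INTENT: Stage.DATA,
    Stage.DATA: Stage.DEVELOPMENT,
    Stage.DEVELOPMENT: Stage.REVIEW,
    Stage.REVIEW: Stage.DEPLOYMENT,
    Stage.DEPLOYMENT: Stage.MONITORING,
}

def advance(current: Stage, signed_off: bool) -> Stage:
    """Move to the next stage only with an explicit sign-off."""
    if not signed_off:
        raise PermissionError(f"{current.name} has not been signed off")
    if current not in NEXT_STAGE:
        raise ValueError(f"{current.name} is a terminal stage")
    return NEXT_STAGE[current]
```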

Laurel: There’s certainly quite a bit to consider here. So specifically, how does that governance that you mentioned earlier play into the development of artificial intelligence across JPMorgan Chase and the tools and services being built?


Stephanie: So, the governance program that we are developing around AI/ML not only ensures that AI/ML models are developed in a responsible, trustworthy manner, but also increases efficiency and innovation in this space. Effective governance ensures that models are developed in the right way and deployed in a responsible way as well. Specifically, it involves establishing the right policies and procedures and controls for the development, testing, deployment, and ongoing monitoring of AI models, so that the models are developed in compliance with regulatory and ethical standards, including how we handle data. And on top of that, models are continuously monitored and updated to reflect changes in the environment.

Laurel: So as a subset of governance, what role does continuous compliance play in that process?

Stephanie: Continuous compliance is an important part of governance in the deployment of AI models. It involves ongoing monitoring and validation of AI models to ensure that they’re compliant with regulatory and ethical standards, with use case objectives, and with the organization’s internal policies and procedures. AI model development is not like conventional software development, where if you don’t change the code, nothing really changes; AI models are driven by data. As the data and environment change, we have to constantly monitor the model’s performance to ensure the model is not drifting away from what we intended. So continuous compliance requires that AI models are constantly monitored and updated to reflect the changes we observe in the environment, to ensure they still comply with regulatory requirements. And as we know, more and more regulatory rules are emerging around the world in the space of using data and AI.

This can be achieved through model monitoring tools that capture data in real time, raise an alert when a model is out of compliance, and notify the developers to make the required changes. But another important thing is not just detecting changes through monitoring, but also establishing clear ownership and accountability for compliance. That can be done through an established responsibility matrix, with governance or oversight boards constantly reviewing these models, and it also involves independent validation of how a model is built and deployed. So, in summary, continuous compliance plays a really important role in the governance of AI models.
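One common way such monitoring tools detect data-driven drift is a population stability index (PSI) comparison between training-time and live feature values, with an alert when the score crosses an agreed threshold. Here is a minimal sketch of that idea; the threshold, the rule of thumb in the comment, and the alerting path are illustrative assumptions, not the firm’s actual controls.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Classic PSI drift score between baseline and live distributions.

    Common rule of thumb (an assumption, not a regulatory standard):
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def check_compliance(baseline: np.ndarray, live: np.ndarray,
                     threshold: float = 0.25) -> None:
    """Alert when observed drift exceeds the agreed threshold."""
    psi = population_stability_index(baseline, live)
    if psi > threshold:
        # In a real system this would notify the accountable owner
        # defined in the responsibility matrix, not just print.
        print(f"ALERT: drift PSI={psi:.3f} exceeds threshold {threshold}")
```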

Laurel: That’s great. Thank you for that detailed explanation. So, because you personally specialize in governance, how can enterprises balance providing safeguards for artificial intelligence and machine learning deployment while still encouraging innovation?

Stephanie: So balancing safeguards for AI/ML deployment with encouraging innovation can be a really challenging task for enterprises. It’s large scale, and it’s changing extremely fast. However, striking that balance is critically important; otherwise, what is the point of the innovation? There are a few key strategies that can help. Number one, establish clear governance policies and procedures: review and update existing policies where they may not suit AI/ML development and deployment, and add new policies and procedures where needed, such as for monitoring and continuous compliance, as I mentioned earlier. Second, involve all the stakeholders in the AI/ML development process: the data engineers, the business, the data scientists, the ML engineers who deploy the models in production, the model reviewers, business stakeholders, and risk organizations. And that’s what we are focusing on. We’re building integrated systems that provide transparency, automation, and a good user experience from beginning to end.

All of this helps streamline the process and bring everyone together. Third, we need to build systems that not only support this overall workflow but also capture the data that enables automation. Oftentimes, the activities in the ML lifecycle happen in different tools because they belong to different groups and departments, and that results in participants manually sharing information, reviewing, and signing off. So having an integrated system is critical. Fourth, monitoring and evaluating the performance of AI/ML models, as I mentioned earlier, is really important, because an unmonitored model can drift and end up having a negative effect relative to its original intent. And doing this manually will stifle innovation. Model deployment requires automation; having that is key to getting your models developed, deployed, and actually operating reproducibly in the production environment.

It’s very, very important. So is having well-defined metrics to monitor the models, covering the infrastructure, the model’s performance itself, and the data. Finally, provide training and education. Because this is a group sport, everyone comes from a different background and plays a different role, so a shared understanding of the entire lifecycle process is really important. And education about what the right data is to use, and whether we are using the data correctly for the use case, will prevent a model from being rejected at deployment much later on. All of these, I think, are key to balancing governance and innovation.
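Those well-defined metrics typically span all three layers Zhang names: infrastructure, the model itself, and the data. A minimal sketch of what such metric definitions might look like follows; the specific metrics, the thresholds, and the min_/max_ naming convention are placeholder assumptions that a real use case review would set.

```python
# Hypothetical thresholds; real values come from each use case's review.
MONITORING_METRICS = {
    "infrastructure": {"max_p99_latency_ms": 200, "max_error_rate": 0.01},
    "model":          {"min_auc": 0.80, "max_score_drift_psi": 0.25},
    "data":           {"max_null_fraction": 0.05, "max_feature_psi": 0.25},
}

def out_of_bounds(observed: dict, limits: dict) -> list[str]:
    """Return the names of metrics whose observed values violate limits.

    Metrics prefixed 'min_' are lower bounds; metrics prefixed 'max_'
    are upper bounds. This naming scheme is an illustrative assumption.
    """
    violations = []
    for name, limit in limits.items():
        value = observed.get(name)
        if value is None:
            continue  # metric not reported in this evaluation window
        if name.startswith("min_") and value < limit:
            violations.append(name)
        elif name.startswith("max_") and value > limit:
            violations.append(name)
    return violations

# Usage: check one layer's observations against its limits.
print(out_of_bounds({"min_auc": 0.77, "max_score_drift_psi": 0.10},
                    MONITORING_METRICS["model"]))  # -> ['min_auc']
```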

Laurel: So there’s another topic here to be discussed, and you touched on it in your answer, which was, how does everyone understand the AI process? Could you describe the role of transparency in the AI/ML lifecycle from creation to governance to implementation?

Stephanie: Sure. So AI/ML is still fairly new and still evolving, but in general, people have settled on a high-level process flow: defining the business problem; acquiring and processing the data to solve the problem; building the model, which is model development; and then model deployment. Prior to deployment, we do a review in our company to ensure the models are developed according to the right responsible AI principles, and then there is ongoing monitoring. When people talk about the role of transparency, it’s not only about the ability to capture all the metadata artifacts and lifecycle events across the entire lifecycle; all of this metadata needs to be transparent, with timestamps, so that people know what happened. That’s how we share the information. And having this transparency is so important because it builds trust, it ensures fairness, since we need to make sure the right data is used, and it facilitates explainability.


Models need to be explained: how do they make decisions? Transparency also supports ongoing monitoring, and it can be achieved in different ways. One thing we stress from the very beginning is understanding the AI initiative’s goals, the use case goal, and the intended data use; we review all of that. How did you process the data? What is the data lineage and the transformation process? What algorithms are being used, and what ensemble algorithms are being used? The model specification needs to be documented and spelled out, including the limitations of when the model should and should not be used. Explainability and auditability: can we actually trace how this model was produced, all the way through the model lineage itself? And also technology specifics such as the infrastructure and the containers involved, because these actually impact the model’s performance; where it’s deployed; which business application is consuming the output prediction of the model; and who can access the decisions from the model. All of these are part of the transparency subject.
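A natural way to capture that inventory is a single timestamped record per model version that reviewers can audit end to end. The sketch below shows one possible shape for such a record; the field names and structure are illustrative assumptions, not a real schema from the firm.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    stage: str              # e.g. "data-processing", "review", "deployment"
    actor: str              # who performed the step
    detail: str             # what happened
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ModelRecord:
    """One transparent, auditable record per model version."""
    use_case_goal: str              # intended use and intended data use
    data_lineage: list[str]         # sources and transformations applied
    algorithms: list[str]           # including any ensemble members
    limitations: str                # when the model should NOT be used
    infrastructure: str             # e.g. container image and runtime
    consuming_application: str      # which business app uses the output
    events: list[LifecycleEvent] = field(default_factory=list)

    def log(self, stage: str, actor: str, detail: str) -> None:
        """Append a timestamped event so reviewers can see what happened."""
        self.events.append(LifecycleEvent(stage, actor, detail))
```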

Laurel: Yeah, that’s quite extensive. So considering that AI is a fast-changing field with many emerging technologies like generative AI, how do teams at JPMorgan Chase keep abreast of these new inventions while also choosing when and where to deploy them?

Stephanie: The speed of innovation in the technology field is growing exponentially. AI technology is still emerging, and keeping up is a truly challenging task. However, there are a few things we can do, and are doing, to help teams keep abreast of these innovations. One, we build a strong internal knowledge base. We have a lot of talent at JPMorgan Chase; teams continue to build their knowledge base, evaluate different technologies, and share what they learn. We also attend conferences, webinars, and industry events, which is really important. Second, we engage with industry experts, thought leaders, and vendors.

Oftentimes, startups have the brightest ideas about what to do with the latest technology. We’re also very much involved with educational institutions and researchers, which helps us learn about the newest developments in the field. Third, we do a lot of pilot projects and POCs [proofs of concept], and we have hackathons in the firm; JPMorgan Chase is a place where employees in all roles are encouraged to come up with innovative ideas. And fourth, we have a lot of cross-functional teams that collaborate. Collaboration is where innovation truly emerges: that’s where new ideas and new ways of solving an existing problem happen, as different minds start thinking about problems from different angles. Those are all the amazing things we gain from each other.

Laurel: So this is a really great conversation, because although technology is obviously at the crux of what you do, people also play a large part in developing and deploying AI and ML models. How do you go about ensuring that the people who develop the models and manage the data operate responsibly?

Stephanie: This is a topic I’m very passionate about, because first and foremost, I think having a diverse team is always the winning strategy. Particularly in the AI/ML world, where we are using data to solve problems, understanding bias and being conscious of it matters, so that we don’t fall into the trap of unintentionally using data in the wrong way. Because models are built by people, there are several ways to promote responsible behavior. One, we establish clear policies and guidelines. Financial services firms tend to have strong risk management, so we’re very strong in that sense; but for the emerging field of AI/ML, we are expanding those policies and guidelines. Two, and this is very important, we provide training and education. Oftentimes, data scientists are more focused on the technology. They’re focused on building the model with the best performance scores and the best accuracy, and perhaps are not so well versed in questions like: am I using the right data? Should I be using this data?

We need continued education on all of those things so that people know how to build models responsibly. Three, we want to foster a culture of responsibility. Within JPMorgan Chase, various groups have already sprung up to talk about this; responsible AI and ethical AI are major topics in our firm, and data privacy and ethics are covered not only in our training classes but also in various employee groups. Four, ensuring transparency. This is where transparency matters: if there’s no visibility into what people are doing, and no separate group able to monitor and review the models being produced, people may never learn the right way of doing things.

The key is to establish a culture of responsibility and accountability so that everyone involved in the process understands the importance of this responsible behavior in producing AI solutions and be held accountable for their actions.

Laurel: So, a quick follow-up to that important people aspect of artificial intelligence. What are some best practices JPMorgan Chase employs to ensure that diversity is taken into account, both when hiring new employees and when building and deploying AI models?

Stephanie: So, JPMorgan Chase is present in over a hundred markets around the globe, right? We actively seek out diverse candidates throughout the world: 49% of our global hires are women, and 58% of our new US hires are ethnically diverse. So we have been at the forefront, and we continue to hire diversely; ensuring diverse hiring practices is very important. Second, we need to create diverse teams as well. Diverse teams include individuals with backgrounds in diverse fields, not just computer science and AI/ML but also sociology and other disciplines, and they all bring rich perspectives and creative problem-solving techniques.


The other thing, and I’m going back to this again, is monitoring and auditing AI models for bias. Not all AI models require bias monitoring; we tier the models depending on their use, and those that do need it get evaluated. It’s therefore very important to follow the risk management framework and identify potential issues before they become significant problems, and to ensure that bias in the data and bias in the model development are detected through a sufficient amount of testing. And, finally, fostering a culture of inclusivity. Creating a culture of inclusivity that values diversity and encourages different perspectives can shape how we develop models. So we hire diverse candidates and we form diverse teams, but we also need to constantly reinforce this culture of DEI, which includes establishing training programs and promoting communication among the communities of AI/ML practitioners.

We talk about how we produce and develop models, and about the things we should be looking out for. Promoting diversity and inclusion in the development and deployment of AI models requires ongoing effort and continuous improvement, and it’s really important to ensure that diverse viewpoints are represented throughout the whole process.
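To illustrate the tiering idea, here is a minimal sketch pairing a toy tiering rule with one standard fairness check, the demographic parity gap. The tiering criteria, the 0.2 tolerance, and the 0/1 group encoding are all illustrative assumptions, not the firm’s actual risk framework.

```python
import numpy as np

def risk_tier(affects_customers: bool, automated_decision: bool) -> str:
    """Toy tiering rule: the criteria here are illustrative only."""
    if affects_customers and automated_decision:
        return "high"    # requires bias evaluation before deployment
    if affects_customers or automated_decision:
        return "medium"
    return "low"

def demographic_parity_gap(predictions: np.ndarray,
                           group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups.

    A standard fairness metric; a binary 0/1 group encoding is assumed.
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(float(rate_a - rate_b))

# Usage: a high-tier model must pass the bias test before release.
preds = np.array([1, 0, 1, 1, 0, 1])
groups = np.array([0, 0, 0, 1, 1, 1])
if risk_tier(affects_customers=True, automated_decision=True) == "high":
    assert demographic_parity_gap(preds, groups) < 0.2, "bias check failed"
```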

Laurel: This has been a really great discussion, Stephanie, but one last question. Much of this technology seems to be emerging so quickly, but how do you envision the future of ModelOps in the next five years?

Stephanie: So, over the last few years, the industry has matured from model development to full AI lifecycle management, and we’ve seen the technology evolve from ML platforms toward an AI ecosystem, from just making ML work to responsible AI. In the near future, I expect ModelOps to continue to evolve and become more and more sophisticated as organizations increasingly adopt AI and machine learning technology. Several key trends are likely to shape the future of ModelOps. The first is increased automation. As the volume and complexity of AI models continue to grow, automation will become increasingly important in managing the entire model lifecycle; we just can’t keep up if we don’t automate. From development to deployment and monitoring, this requires the development of much more advanced tools and platforms that can automate many of the tasks currently still performed mostly by human operators.

The second is a greater focus on explainability and interpretability. As AI models become more complex and are used to make more important decisions, there will be an increased focus on ensuring that models are explainable and interpretable, so that stakeholders can understand how decisions are made. This will require the development of new techniques and tools for model interpretability. The third is integration with DevOps. As I mentioned earlier, just making the ML work is no longer enough; many trained models are now getting into the production environment. So ModelOps will continue to integrate with DevOps, enabling organizations to manage both software and AI models in a unified manner. This will require new tools and platforms that enable seamless integration of AI model development and deployment with software development and deployment.

And then there is the increased use of cloud-based services. As more organizations move their operations to the cloud, there will be increased use of cloud-based services for AI model development and deployment, and this will again require new tools that integrate seamlessly with cloud-based infrastructure. So the future of ModelOps will likely bring more automation, an increased focus on explainability and interpretability, tighter integration with DevOps, and increased use of the cloud.
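That DevOps integration often takes the form of a single promotion gate in a CI/CD pipeline that combines classic software checks with ML-specific and governance checks. Below is a minimal sketch of such a gate; the function name, thresholds, and gate list are illustrative assumptions rather than any particular pipeline’s actual configuration.

```python
def promote_to_production(unit_tests_pass: bool,
                          model_auc: float,
                          drift_psi: float,
                          review_signed_off: bool) -> bool:
    """Single gate evaluated in the CI/CD pipeline before release.

    Thresholds are placeholders; real ones come from the use case review.
    """
    gates = {
        "unit tests": unit_tests_pass,           # classic DevOps check
        "model quality": model_auc >= 0.80,      # ML-specific check
        "data stability": drift_psi <= 0.25,     # ML-specific check
        "independent review": review_signed_off, # governance check
    }
    failed = [name for name, ok in gates.items() if not ok]
    if failed:
        print("Promotion blocked:", ", ".join(failed))
        return False
    return True

# Example: a model with good AUC but no review sign-off is blocked.
promote_to_production(True, 0.86, 0.10, review_signed_off=False)
```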

Laurel: Well, thank you very much, Stephanie, for what has been a fantastic episode of the Business Lab.

Stephanie: My pleasure. Thank you for having me.

Laurel: That was Stephanie Zhang, the managing director and general manager of ModelOps, AI and ML Lifecycle Management and Governance at JPMorgan Chase, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This podcast is for informational purposes only and is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations is not the responsibility of JPMorgan Chase & Co.
