Summary

In this code pattern, gain better insights and explainability by learning how to use the AI Explainability 360 Toolkit to demystify the decisions made by a machine learning model. This not only helps policymakers and data scientists develop trusted, explainable AI applications, but also improves transparency for everyone. To demonstrate the toolkit, we build on the existing fraud detection code pattern and explain its model with AIX360 algorithms.

Description

Imagine a scenario in which you visit a bank to take out a $1M loan. The loan officer uses an AI-powered system that predicts whether you are eligible for a loan and how large that loan can be. In this example, the AI system recommends that you are not eligible. You might then have a few questions to think about:

Will you as a customer be satisfied with the service?
Would you want justification for the decision made by the AI system?
Should the loan officer double-check the decision made by the AI system, and would you want them to know the underlying mechanism of the AI model?
Should the bank completely trust and rely on the AI-powered system?

You might agree that it’s not enough to just make predictions; sometimes you need a deep understanding of why a decision was made. There are many reasons to understand the underlying mechanism of a machine learning model, including:

Human readability
Bias mitigation
Justifiability
Interpretability
Fostering trust and confidence in AI systems


In this code pattern, we demonstrate how the three explainability algorithms work:

The Contrastive Explanations Method (CEM) algorithm, available in the AI Explainability 360 Toolkit, explains a prediction by identifying the features that are minimally sufficient to justify it (pertinent positives) and the features whose absence is necessary to keep it unchanged (pertinent negatives).
The ProtoDash algorithm works with an existing predictive model to show how the current customer compares to others who have similar profiles and repayment records. Based on the model’s prediction and the explanation of how it reached that recommendation, the loan officer can better evaluate the applicant’s risk and make a more informed decision. (A minimal ProtoDash sketch follows this list.)
The Generalized Linear Rule Model (GLRM) algorithm in the AI Explainability 360 Toolkit provides an enhanced level of explainability that helps a data scientist decide whether the model can be deployed.
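As an illustration of how an AIX360 explainer is invoked, here is a minimal ProtoDash sketch. The random feature matrix and the choice of m = 5 prototypes are hypothetical stand-ins for the fraud detection dataset used in the notebook, and the exact argument order of explain() should be checked against the ProtodashExplainer docstring in your installed AIX360 version.

```python
# Minimal ProtoDash sketch (hypothetical data; the notebook uses the
# fraud detection dataset rather than random numbers).
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Stand-in feature matrix: rows are customers, columns are numeric features.
rng = np.random.default_rng(0)
X = rng.random((500, 10))

explainer = ProtodashExplainer()

# Select m=5 prototypes that best summarize the dataset.
# W: importance weights of the prototypes, S: their row indices in X.
W, S, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", S)
print("Normalized weights:", W / np.sum(W))
```

To explain an individual applicant, you would pass the applicant’s feature vector as the dataset to be represented, so the selected prototypes become similar past customers whose outcomes put the model’s recommendation in context.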

Flow

Log in to IBM Watson® Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
Upload the .csv data file to IBM Cloud Object Storage.
Load the data file in the Watson Studio notebook.
Install the AI Explainability 360 Toolkit and the Adversarial Robustness Toolbox in the Watson Studio notebook (a typical install cell is sketched after this list).
Get visualizations for explainability and interpretability of the AI model for the three different types of users.
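The install step is run directly in the notebook; the package names below are the ones published on PyPI, and you may want to pin versions to match your Watson Studio runtime.

```python
# Run inside the Watson Studio notebook to install both toolkits.
# Pin versions if your runtime requires specific releases.
!pip install aix360
!pip install adversarial-robustness-toolbox
```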

Instructions

Find the detailed steps in the README file. Those steps explain how to:

Create an account with IBM Cloud.
Create a new Watson Studio project.
Add data.
Create the notebook.
Insert the data as a pandas DataFrame (a simplified sketch follows this list).
Run the notebook.
Analyze the results.
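The "Insert the data as a pandas DataFrame" step uses the notebook's Insert to code helper, which generates a snippet that reads the uploaded .csv file from IBM Cloud Object Storage using the project's credentials. Conceptually, the generated snippet is equivalent to the simplified sketch below; the file name fraud_data.csv is a placeholder.

```python
import pandas as pd

# Simplified stand-in for the snippet generated by "Insert to code" in
# Watson Studio; the real snippet authenticates to Cloud Object Storage.
# "fraud_data.csv" is a placeholder file name.
df = pd.read_csv("fraud_data.csv")

print(df.shape)   # confirm the data loaded
df.head()         # preview the first rows
```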

This code pattern is part of The AI 360 Toolkit: AI models explained use case series, which helps stakeholders and developers understand the complete AI model lifecycle and make informed decisions.
