How do you remove bias from the machine learning models and ensure that the predictions are fair? How can you adopt responsible AI? How do you build trusted AI? And, what are the three stages in which the bias mitigation solution can be applied? This code pattern answers these questions and more to help developers, data scientists, and stakeholders make informed decisions by consuming the results of predictive models.
Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems from the ground up, by design. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure that they meaningfully address problems like AI bias. While accuracy is one metric for evaluating how well a machine learning model performs, fairness gives you a way to understand the practical implications of deploying the model in a real-world situation.
In this code pattern, you use a fraud data set to predict fraudulent transactions, reducing monetary loss and mitigating risk. You learn to address three of the pillars of building trustworthy AI pipelines (fairness, explainability, and robustness of the predictive models) and to enhance the effectiveness of the AI predictive system.
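To see how fairness can diverge from accuracy, consider a toy illustration (not the pattern's actual data set). Here "group" stands for a protected attribute, and the favorable outcome is a transaction being approved rather than flagged; the group names and numbers are made up for the sketch:

```python
# Toy illustration: a model can be reasonably accurate overall while
# treating one group far less favorably than another.
records = [
    # (group, true_label, predicted_label); 1 = favorable (approved)
    ("priv", 1, 1), ("priv", 1, 1), ("priv", 0, 1), ("priv", 0, 0), ("priv", 1, 1),
    ("unpriv", 1, 0), ("unpriv", 0, 0), ("unpriv", 1, 1), ("unpriv", 0, 0), ("unpriv", 1, 0),
]

accuracy = sum(y == yhat for _, y, yhat in records) / len(records)

def favorable_rate(group):
    preds = [yhat for g, _, yhat in records if g == group]
    return sum(preds) / len(preds)

# Disparate impact: ratio of favorable-outcome rates between groups.
# Values far below 1.0 (a common rule of thumb is below 0.8) indicate
# bias against the unprivileged group.
disparate_impact = favorable_rate("unpriv") / favorable_rate("priv")
print(accuracy, disparate_impact)  # 0.7 accuracy, but disparate impact of 0.25
```

The model is 70% accurate, yet the unprivileged group receives the favorable outcome only a quarter as often as the privileged group, which is exactly the kind of gap a fairness metric surfaces and an accuracy score hides.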
After completing this code pattern, you understand how to:
Create a project using IBM Watson Studio
Use the AI Fairness 360 Toolkit
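The AI Fairness 360 Toolkit is published on PyPI under the package name `aif360`, so inside a notebook it can be installed with pip (in a notebook cell, prefix the command with `!`):

```shell
pip install aif360
```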
Log in to IBM Watson Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
Upload the .csv data file to IBM Cloud Object Storage.
Load the data file in the Watson Studio notebook.
Install the AI Fairness 360 Toolkit in the Watson Studio notebook.
Analyze the results after applying the bias mitigation algorithm during pre-processing, in-processing, and post-processing stages.
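The pre-processing stage can be illustrated with reweighing (Kamiran and Calders), which AIF360 implements as its Reweighing algorithm: each (group, label) combination in the training data receives a weight that makes the protected attribute and the label statistically independent. A minimal sketch of the underlying formula on synthetic counts (the group names and counts here are illustrative, not from the pattern's data set):

```python
from collections import Counter

# Synthetic (group, label) pairs; 1 = favorable outcome.
data = [("priv", 1)] * 3 + [("priv", 0)] * 1 \
     + [("unpriv", 1)] * 1 + [("unpriv", 0)] * 3

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

# Reweighing: w(g, y) = P(g) * P(y) / P(g, y), written with raw counts.
# Under-represented combinations (e.g. favorable outcomes for the
# unprivileged group) get weights above 1; over-represented ones, below 1.
weights = {
    (g, y): (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
    for (g, y) in pair_counts
}
print(weights)
```

Training the classifier with these instance weights counteracts the skew before the model ever sees the data; the in-processing and post-processing stages instead adjust the learning objective and the model's predictions, respectively.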
Find the detailed steps for this pattern in the README file. The steps show you how to:
Create an account with IBM Cloud.
Create a new Watson Studio project.
Create the notebook.
Insert the data as DataFrame.
Run the notebook.
Analyze the results.
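When you analyze the results, the post-processing stage can be pictured as adjusting decisions after the model has scored each instance. The sketch below, in the spirit of AIF360's threshold-adjusting post-processors (the scores and groups are invented for illustration), lowers the unprivileged group's decision threshold until its favorable-outcome rate matches the privileged group's:

```python
def favorable_rate(scores, threshold):
    """Fraction of instances whose score clears the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def match_threshold(unpriv_scores, target_rate):
    """Scan thresholds downward from 0.5 (in steps of 0.01) until the
    unprivileged group's favorable rate reaches the target rate."""
    for t100 in range(50, -1, -1):
        t = t100 / 100
        if favorable_rate(unpriv_scores, t) >= target_rate:
            return t
    return 0.0

priv_scores = [0.9, 0.8, 0.7, 0.6, 0.4]      # 4/5 favorable at threshold 0.5
unpriv_scores = [0.6, 0.45, 0.4, 0.35, 0.2]  # 1/5 favorable at threshold 0.5

target = favorable_rate(priv_scores, 0.5)
t_unpriv = match_threshold(unpriv_scores, target)
print(t_unpriv, favorable_rate(unpriv_scores, t_unpriv))
```

Comparing fairness metrics such as disparate impact before and after each mitigation stage is the core of the "analyze the results" step.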
This code pattern is part of The AI 360 Toolkit: AI models explained use case series, which helps stakeholders and developers fully understand the AI model lifecycle and make informed decisions.