
Measuring Bias in Machine Learning: The Statistical Bias Test

This tutorial defines statistical bias in a machine learning model and demonstrates how to perform the statistical parity test on synthetic data.
May 5, 2020 · 6 min read

This article was written by Sarah Khatry and Haniyeh Mahmoudian, data scientists at DataRobot.

The question of bias in machine learning models has been the subject of a lot of attention in recent years. Stories of models going wrong make headlines, and humanitarian lawyers, politicians, and journalists have all contributed to the conversation about what ethics and values we want to be reflected in the models we build.

While human bias is a thorny issue and not always easily defined, bias in machine learning is, at the end of the day, mathematical. There are many different types of tests that you can perform on your model to identify different types of bias in its predictions. Which test to perform depends mostly on what you care about and the context in which the model is used.

One of the most broadly applicable tests out there is statistical parity, which this hands-on tutorial will walk through. Now, bias is always assessed relative to different groups of people identified by a protected attribute in your data, e.g., race, gender, age, sexuality, nationality, etc.

With statistical parity, your goal is to measure if the different groups have equal probability of achieving a favorable outcome. A classic example is a hiring model, for which you would like to ensure that male and female applicants have an equal probability of being hired. In a biased model, you will instead identify that one group is privileged with a higher probability of being hired, while the other group is underprivileged.
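Written out, the quantity the test compares across groups (expressed here in this tutorial's own terms) is:

P(prediction = favorable | group) / P(prediction = favorable | privileged group)

A ratio of 1 means perfect parity, and a ratio below 1 means the group is less likely than the privileged group to receive the favorable outcome.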

To demonstrate how this works in practice, we’ll first construct synthetic data with bias we’ve predefined, then confirm via analysis that the data reflects the situation we intended, and finally apply the statistical parity test.

Generating Synthetic Data

In this tutorial, we'll be using the pandas package and Python's built-in random module, but every step in this process can also be reproduced in R.

import random
import pandas as pd

To generate synthetic data with one protected attribute and model predictions, we first need to specify a few inputs: the total number of records, the protected attribute itself (here two generic values, A and B), and the model prediction that is associated with the favorable outcome, in this example the value 1.

num_row = 1000 # number of rows in the data
prot_att = 'prot_att' # column name for protected attribute
model_pred_bin = 'prediction' # column name for predictions
pos_label = 1 # prediction value associated with the positive/favorable outcome
prot_group = ['A','B'] # two groups in our protected attribute

As in real life, groups A and B may not be evenly distributed in our data. In the code below, we decide that 60% of the population will come from privileged group B, which has a 30% chance of receiving the favorable outcome. Unprivileged group A makes up the remaining 40% of the data and has only a 15% chance of receiving the favorable outcome.

priv_g = 'B'  # privileged group
priv_g_rate = 60  # 60% of the population is from group B
priv_p_rate = 30  # 30% of predictions for group B are the favorable outcome 1
unpriv_p_rate = 15  # 15% of predictions for group A are the favorable outcome 1
# weighted sampling lists: drawing uniformly from each list reproduces the rates above
biased_list_priv = [prot_group[0]] * (100 - priv_g_rate) + [prot_group[1]] * priv_g_rate
biased_list_pos_priv = [0] * (100 - priv_p_rate) + [1] * priv_p_rate
biased_list_pos_unpriv = [0] * (100 - unpriv_p_rate) + [1] * unpriv_p_rate
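As an optional sanity check, reusing the variables just defined, you can confirm that each list encodes the intended rate as the share of its 100 entries:

print(biased_list_priv.count(priv_g) / len(biased_list_priv))  # 0.6
print(biased_list_pos_priv.count(pos_label) / len(biased_list_pos_priv))  # 0.3
print(biased_list_pos_unpriv.count(pos_label) / len(biased_list_pos_unpriv))  # 0.15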

For each record of the data, we randomly assign a protected group and a prediction by sampling from the weighted lists we just built, and then create a dataframe from the list of records.

list_df = []  # empty list to store the synthetic records
for _ in range(num_row):
    # randomly assign a protected group, weighted towards B
    prot_g = random.choice(biased_list_priv)
    if prot_g == priv_g:
        # privileged group: 30% chance of the favorable outcome
        pred = random.choice(biased_list_pos_priv)
    else:
        # unprivileged group: 15% chance of the favorable outcome
        pred = random.choice(biased_list_pos_unpriv)
    # add the new record to the list
    list_df.append([prot_g, pred])
# create a dataframe from the list of records
df = pd.DataFrame(list_df, columns=[prot_att, model_pred_bin])
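If you prefer vectorized code, here is a drop-in equivalent sketch using numpy (an extra dependency this tutorial doesn't otherwise use; the seed is arbitrary and only makes the run reproducible):

import numpy as np

rng = np.random.default_rng(42)
# assign each record a group, weighted 40% A and 60% B
groups = rng.choice(prot_group, size=num_row, p=[1 - priv_g_rate / 100, priv_g_rate / 100])
# each record's chance of the favorable outcome depends on its group
rates = np.where(groups == priv_g, priv_p_rate / 100, unpriv_p_rate / 100)
preds = (rng.random(num_row) < rates).astype(int)
df = pd.DataFrame({prot_att: groups, model_pred_bin: preds})

Either version produces a dataframe with the same structure and the same built-in bias.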

Interpreting the Data

Now that we have our synthetic data, let’s analyze what we’ve built. For each group, A and B, what is the probability of receiving the favorable or unfavorable outcome?

df_group = (df.groupby([prot_att])[model_pred_bin].value_counts()
            / df.groupby([prot_att])[model_pred_bin].count()).reset_index(name='probability')
print(df_group)
  prot_att  prediction  probability
0        A           0     0.849490
1        A           1     0.150510
2        B           0     0.713816
3        B           1     0.286184
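The same per-group probabilities can also be obtained in a single call with pandas' crosstab, an equivalent alternative to the groupby above:

# rows are groups, columns are predictions; each row sums to 1
print(pd.crosstab(df[prot_att], df[model_pred_bin], normalize='index'))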

Inspecting these probabilities, it’s easy to see that group B is almost twice as likely to receive the favorable outcome, with a probability of 28.6% versus 15.1% for group A. Our synthetic data was designed to give group B a probability of 30%, so we’re close to the mark.

Since the data is randomly generated, your run may give slightly different numbers. Next, we save each group’s probability of the favorable outcome in a dictionary.

prot_att_dic = {}
for att in prot_group:
    # probability of the favorable outcome for this group
    temp = df_group[(df_group[prot_att] == att) & (df_group[model_pred_bin] == pos_label)]
    prot_att_dic[att] = temp['probability'].iloc[0]
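Equivalently, the same dictionary can be built in one expression, a compact pandas-only alternative to the loop above:

# per-group probability of the positive label, keyed by group
prot_att_dic = (df[model_pred_bin] == pos_label).groupby(df[prot_att]).mean().to_dict()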

The Statistical Parity Test

For each group, statistical parity outputs the ratio of that group’s probability of achieving the favorable outcome to the privileged group’s probability. We iterate over the dictionary of per-group probabilities to construct these ratios.

prot_test_res = {}
for group, prob in prot_att_dic.items():
    # ratio of this group's probability to the privileged group's
    prot_test_res[group] = prob / prot_att_dic[priv_g]
for group, ratio in prot_test_res.items():
    print(group, ' : ', ratio)
A  :  0.5259207131128314
B  :  1.0

For the privileged group, B, the statistical parity score is 1, as it should be. For the other group, A, the score is 0.526, indicating that group A is roughly half as likely as group B to achieve the favorable outcome.
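To make the test easy to reuse, the steps above can be packaged into a single function. This is a minimal sketch; the name and signature are our own, not a library API:

def statistical_parity(df, prot_att, pred_col, pos_label, priv_group):
    # per-group probability of the favorable outcome
    probs = (df[pred_col] == pos_label).groupby(df[prot_att]).mean()
    # ratio of each group's probability to the privileged group's
    return probs / probs[priv_group]

print(statistical_parity(df, prot_att, model_pred_bin, pos_label, priv_g))

As a point of reference, a common rule of thumb in fairness auditing, the "four-fifths rule" borrowed from US employment guidelines, flags ratios below 0.8 as potential adverse impact; group A’s score of roughly 0.53 falls well short of that threshold.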

The statistical parity test provides a simple assessment of how different the predicted outcomes may be for selected groups in your data. The goal of measuring bias is twofold. First, the test produces a transparent metric, making bias easier and more concrete to communicate. Ideally, though, identifying bias is only the first step toward mitigating it in your model. Mitigation is a hot area of research in machine learning, with many techniques being developed to accommodate different kinds of bias and modeling approaches.

With the right combination of testing and mitigation techniques, it becomes possible to iteratively improve your model, reducing bias while preserving accuracy. You can design machine learning systems not just to learn from historical outcomes, but to reflect your values in future decision-making.

Find out how to build AI you can trust with DataRobot.
