Machine Learning Model Fairness in Practice

Editor’s Note: See Jakub’s talk about Machine Learning “Model Fairness in Practice” at ODSC West 2019

In the last few years, interest in fairness in machine learning has been gaining a lot of momentum. Rightfully so: our models are becoming more and more prevalent in our daily lives, and their impact on society at large is rapidly increasing. I believe that today, more than ever, it is crucial to make sure that the models we develop treat us, humans, fairly.

[Related Article: Making Fairness an Intrinsic Part of Machine Learning]

[Figure: illustration of fairness in machine learning, taken from Moritz Hardt's lecture notes]

Model fairness is a very complex subject involving ethical, legal, and even philosophical considerations, but from a machine learning practitioner's point of view you probably want to know:

  • How should you measure fairness?
  • What can you do to reduce unfair bias in your models?
  • What are the tools that can help you incorporate fairness evaluation and de-biasing in your daily work?

In this blog post I will try and answer those questions!

How to measure fairness?

There are many ways to measure fairness, and the right one varies from problem to problem and from person to person. Ultimately, you have to decide which metric is appropriate for your use case. The most commonly used fairness metrics are:

  • Disparate impact: the ratio of the fraction of positive predictions between the two groups. We want the members of each group to be selected at the same rate. The drawback is that this metric ignores how accurately members of each group are selected, which can be problematic.
  • Performance difference/ratio: you can calculate all the standard performance metrics, such as false-positive rate, accuracy, precision, or recall (equal opportunity), separately for the privileged and unprivileged groups and then look at the difference or the ratio of those values.


  • Entropy-based metrics: generalized entropy is calculated for each group and then compared. This approach can be used to measure fairness not only at the group level but also at the individual level. The most commonly used flavor of generalized entropy is the Theil index, originally used to measure income inequality. (A minimal sketch of these three metrics follows this list.)
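
To make these definitions concrete, here is a minimal NumPy sketch of disparate impact, the equal opportunity (true-positive-rate) difference, and the Theil index. The toy arrays y_true, y_pred, and group are made up, and the benefit term b_i = y_pred_i - y_true_i + 1 follows the convention commonly used for the Theil index in fairness toolkits; treat the snippet as illustrative rather than a reference implementation.

import numpy as np

# Toy data: true labels, model predictions, and a binary protected attribute
# (1 = privileged group, 0 = unprivileged group). The favorable label is 1.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

priv, unpriv = group == 1, group == 0

# Disparate impact: ratio of positive-prediction rates (unprivileged / privileged).
# A value close to 1 means both groups are selected at the same rate.
disparate_impact = y_pred[unpriv].mean() / y_pred[priv].mean()

# Equal opportunity difference: gap in true-positive rates between the groups.
def tpr(y_t, y_p):
    return y_p[y_t == 1].mean()

equal_opportunity_diff = tpr(y_true[unpriv], y_pred[unpriv]) - tpr(y_true[priv], y_pred[priv])

# Theil index (generalized entropy with alpha = 1) over individual "benefits"
# b_i = y_pred_i - y_true_i + 1, so it also captures individual-level unfairness.
b = y_pred - y_true + 1.0
ratio = b / b.mean()
positive = ratio[ratio > 0]                     # 0 * ln(0) is treated as 0
theil_index = np.sum(positive * np.log(positive)) / len(b)

print(f"Disparate impact:             {disparate_impact:.2f}")
print(f"Equal opportunity difference: {equal_opportunity_diff:.2f}")
print(f"Theil index:                  {theil_index:.3f}")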

So which one should you choose? Be empathetic toward your users, think about how they would measure fairness, and find a metric that reflects that.

What can you do to make your model less biased?

In a perfect world, you would start thinking about fairness at the data collection stage. However, in many real-life situations the dataset has already been prepared, the features have already been extracted, or the model has even already been trained. Don't worry: in each of those situations you can apply de-biasing techniques to make your model fairer:

  • Pre-processing: the bias is removed from the data before training the model. One example is Learning Fair Representations, where the original features are transformed into a fair, latent space. This lets you mitigate bias at the data level and use any framework you like for modeling. The problem is that bias is often hidden across many features, and truly removing it is hard in practice.

 

  • In-processing: information about the sensitive features is used to guide model training. An interesting approach is adversarial de-biasing, where, apart from the original task of predicting a class, the model also predicts the sensitive feature and is penalized for doing so correctly. It shows a lot of promise and seems to work pretty well. However, only a limited number of frameworks/projects are at your disposal.
  • Post-processing: the predictions of the model are adjusted to minimize some fairness metric. For example, one can reweight predictions so that the prediction distributions for the privileged and unprivileged groups match, and hence minimize the equal opportunity difference. This approach lets you use whatever tools you want and apply fixes post hoc, but it does not work with many fairness metrics. (A minimal post-processing sketch follows this list.)
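
To make the post-processing idea concrete, here is a minimal, purely illustrative sketch (not the reweighting method mentioned above, and not taken from any particular library) that picks a separate decision threshold for the unprivileged group so that both groups end up with roughly the same positive-prediction rate. All names (scores, group, find_group_threshold) are hypothetical.

import numpy as np

def positive_rate(scores, threshold):
    # Fraction of examples predicted positive at a given threshold.
    return float((scores >= threshold).mean())

def find_group_threshold(scores_unpriv, target_rate, candidates=np.linspace(0, 1, 101)):
    # Pick the unprivileged-group threshold whose positive-prediction rate
    # is closest to the rate observed for the privileged group.
    gaps = [abs(positive_rate(scores_unpriv, t) - target_rate) for t in candidates]
    return candidates[int(np.argmin(gaps))]

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=200)            # 1 = privileged, 0 = unprivileged
# In this toy example the unprivileged group gets systematically lower scores.
scores = np.clip(rng.normal(loc=np.where(group == 1, 0.6, 0.4), scale=0.15), 0, 1)

base_threshold = 0.5
target = positive_rate(scores[group == 1], base_threshold)
unpriv_threshold = find_group_threshold(scores[group == 0], target)

# Final predictions use a per-group threshold instead of a single global one,
# which roughly equalizes the selection rates of the two groups.
y_pred = np.where(group == 1, scores >= base_threshold, scores >= unpriv_threshold).astype(int)

Note that adjusting thresholds per group trades off overall accuracy against the chosen fairness metric; real post-processing algorithms make that trade-off explicit.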

As of today, the available methods are still far from perfect but with the amount of research that goes into model fairness, we can expect rapid improvements.

Tooling

There are a few tools out there that deal with fairness metrics calculations or de-biasing of a sort (Facets, FairML, FairNN). However, the only complete and popular framework that I know of is AIF360 from IBM. 

[Figure: AIF360 demo]

With AIF360 you can calculate fairness metrics and use an sklearn-like interface to de-bias your models in all three flavors: pre-, in-, and post-processing. It has its quirks, but overall it is a really helpful tool that you may want to incorporate into your workflow.
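
As a rough illustration of what that workflow can look like, here is a sketch of the pre-processing flavor with AIF360's Reweighing algorithm and its dataset/metric abstractions (BinaryLabelDataset, BinaryLabelDatasetMetric). The toy DataFrame and column names are made up, and the exact signatures are written from memory, so double-check them against the AIF360 documentation.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Wrap a pandas DataFrame (label column 'label', protected attribute 'race')
# in AIF360's binary-label dataset abstraction.
df = pd.DataFrame({
    "feature_1": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "race":      [0,   1,   0,   1,   0,   1],
    "label":     [0,   1,   0,   1,   1,   1],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["race"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"race": 1}]
unprivileged = [{"race": 0}]

# Fairness of the raw data: disparate impact before any de-biasing.
metric_before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact before:", metric_before.disparate_impact())

# Pre-processing de-biasing: Reweighing assigns instance weights that balance
# the groups; a downstream model can consume them as sample weights.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact after:", metric_after.disparate_impact())

The in- and post-processing algorithms in AIF360 follow a similar fit/transform or fit/predict pattern, which is what makes it feel sklearn-like.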

[Related Article: Interpretable Machine Learning – Fairness, Accountability, and Transparency in ML systems]

Bonus

I wanted to make it easy for everyone to start tracking the fairness of machine learning models, so I added a log_fairness_classification_metrics function to the neptune-contrib library.

It abstracts away some boilerplate from AIF360 and makes it really easy to use. 

So, if you want to keep track of all the major fairness classification metrics and charts for your experiments, simply run:

import neptunecontrib.monitoring.fairness as npt_fair

npt_fair.log_fairness_classification_metrics(
    test['two_year_recid'], test['recid_prediction'], test[['race']],
    favorable_label=0, unfavorable_label=1,
    privileged_groups={'race': [3]}, unprivileged_groups={'race': [1, 2]})

Conclusions

In this blog post you've learned how to measure machine learning model fairness, what can be done to mitigate unfair bias, and which tools can help you do it.

Of course, we've only scratched the surface here, but I hope I got you interested and that you now know how to start taking fairness into consideration in your data science projects.

Editor’s Note: See Jakub’s talk “Model Fairness in Practice” at ODSC West 2019

Jakub Czakon

Jakub is a Senior Data Scientist at neptune.ai, a data science collaboration hub. Before neptune.ai he worked at deepsense.ai and elsewhere, delivering machine learning projects for facial recognition, OCR, cancer detection, satellite image segmentation, NLP on job market data, and more. https://twitter.com/neptune_ai https://neptune.ai/ https://www.linkedin.com/in/jakub-czakon-2b797b69
