Complex ‘black box’ models are becoming increasingly prevalent in industries involving high-stakes decisions, such as finance, healthcare, and insurance. As machine learning algorithms take a prominent role in our daily lives, explaining their decisions will only grow in importance.
By now there is enough written about why we may want to explain the decisions and behavior of machine learning models. Model debugging, bias discovery, and increased social trust and acceptance are some of the important and often cited reasons.
What is machine learning explainability?
Before we go into further detail, let’s define explainability. I need to acknowledge that there is no consensus on what machine learning explainability is, but the definition I tend to stick to is that explainability (XAI) refers to:
“Methods and models that make the behaviour and predictions of a machine learning model understandable to humans.”
Types of explainability: What can be explained?
What type of explanations can we provide? We can differentiate between:
- Intrinsic and post-hoc methods: Intrinsic methods relate to restricting the complexity of the model and/or features before the model training; Post-hoc methods apply an explainability technique after the model training.
- Model-specific and model-agnostic: Model-specific methods focus on explaining the behavior and decisions of a single type of algorithm, whereas model-agnostic methods work with any type of model.
- Local and global: Local methods explain the decision of the model for a single instance in the data, whereas global methods explain the overall model behavior.
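To make the local/global distinction concrete, here is a minimal sketch using a hypothetical loan-scoring ‘black box’ and simple mean-substitution attributions. The model, feature names, and numbers are all illustrative assumptions, not a real model or a recommended method:

```python
def black_box(income, debt, age):
    """Stand-in for an opaque model: higher score = more creditworthy (assumed)."""
    return 0.6 * income - 0.8 * debt + 0.1 * age

# A tiny, made-up dataset of applicants (income, debt, age), scaled to [0, 1].
applicants = [
    (0.9, 0.2, 0.5),
    (0.3, 0.7, 0.6),
    (0.5, 0.5, 0.2),
    (0.7, 0.4, 0.8),
]
feature_names = ("income", "debt", "age")
means = [sum(row[i] for row in applicants) / len(applicants) for i in range(3)]

def local_explanation(x):
    """LOCAL: attribute one applicant's score to each feature by replacing
    that feature with its dataset mean and measuring the change in output."""
    base = black_box(*x)
    attributions = {}
    for i, name in enumerate(feature_names):
        perturbed = list(x)
        perturbed[i] = means[i]
        attributions[name] = base - black_box(*perturbed)
    return attributions

def global_explanation():
    """GLOBAL: average the absolute local attributions over the whole dataset
    to rank features by their overall influence on the model."""
    totals = {name: 0.0 for name in feature_names}
    for x in applicants:
        for name, a in local_explanation(x).items():
            totals[name] += abs(a)
    return {name: t / len(applicants) for name, t in totals.items()}

print(local_explanation(applicants[0]))  # why THIS applicant got their score
print(global_explanation())              # which features matter overall
```

The same attribution idea answers both questions: applied to one instance it is a local explanation; aggregated over the dataset it becomes a global one.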
Figure 1 provides a succinct taxonomy of the most common model-agnostic methods. There are various algorithms to explain the data and/or the model: some are visual methods, and we have the aforementioned distinction between global and local methods. Don’t be overwhelmed if some of the methods are new to you. The purpose of this flowchart is not to introduce you to new XAI methods but to help you understand how they all fit together. This taxonomy could also help facilitate conversations between the various stakeholders involved in a use case. But sadly, this chart alone is not sufficient.
Explainability in the ML development cycle
We cannot talk about XAI in practice without discussing where XAI fits in the overall machine learning development cycle. Let’s say that we have a clearly defined business problem (I know that is wishful thinking in many scenarios). Even less believably, the input data is clearly identified and even provided to us.
The first important step is to gather the XAI requirements of all stakeholders — if explainability is relevant for the use case, of course. The set of stakeholders will differ with the use case. For example, let’s assume we are using machine learning to help decide whether we should grant a loan to a client. In this case, relevant stakeholders could be:
- the loan officer or relationship manager,
- the client themselves,
- the technical team building and deploying the model,
- the business stakeholders who accept the risk of the model.
Depending on the broader organizational setup, we could also have
- the model validation team, compliance, the privacy office and legal teams, and audit,
- In some countries, there are external regulators who may have their own explainability requirements.
And how can we gather the requirements of all these stakeholders? Commonly used approaches include structured and semi-structured interviews, surveys, and workshops.
Keep in mind that it is very likely that different stakeholders will have different XAI requirements, and thus we will end up utilizing multiple approaches.
The next step is to train our machine learning algorithm of choice (taking into account the XAI requirements, if relevant).
The third step consists of applying the chosen approach or set of approaches. I cannot stress enough that different XAI techniques help you answer different questions, and sadly there is no single one that works across all scenarios (despite the popularity and nice properties of Shapley values).
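To illustrate one of those nice properties, here is a minimal, exact Shapley-value computation for a hypothetical three-feature model. In practice you would use a library such as shap; the model, baseline, and instance below are assumptions chosen purely for illustration:

```python
from itertools import permutations
from math import factorial

def model(x):
    # Hypothetical black box with a feature interaction.
    return x[0] * x[1] + 0.5 * x[2]

baseline = [0.0, 0.0, 0.0]   # assumed reference point for 'absent' features
instance = [1.0, 2.0, 4.0]   # the prediction we want to explain

def value(present):
    """Model output when only the features in `present` take their real values."""
    z = [instance[i] if i in present else baseline[i] for i in range(len(instance))]
    return model(z)

def shapley(i):
    """Average marginal contribution of feature i over all feature orderings."""
    n = len(instance)
    total = 0.0
    for order in permutations(range(n)):
        before = set(order[:order.index(i)])
        total += value(before | {i}) - value(before)
    return total / factorial(n)

phis = [shapley(i) for i in range(3)]
print(phis)  # the attributions sum to model(instance) - model(baseline)
```

The appealing property on display: the three attributions always add up exactly to the difference between the explained prediction and the baseline prediction, and the interaction between features 0 and 1 is split fairly between them. The catch is the cost: exact computation enumerates all orderings, which is why practical tools approximate.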
And let’s be optimistic (or unrealistic?) and assume the model will be deployed in production. Once it is deployed and we monitor its predictions and explanations, we should also track the users’ interactions with the explanations and record any feedback.
Many of you may know that model monitoring and maintenance is a whole can of worms on its own. Having XAI added to it does not make it easier. While it is (relatively) easy to compare the model’s prediction versus what really happened, it’s much more difficult to validate the explanations or provide a quantitative measure for their accuracy, validity and fidelity.
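There is no standard metric for explanation validity, but one simple proxy you could track is drift in the attribution profile of recent predictions against a reference window. A sketch, with made-up attribution vectors and an assumed threshold:

```python
def attribution_drift(reference, live):
    """Mean absolute difference between two per-feature attribution profiles,
    each profile being the column-wise average of a window of attributions."""
    avg = lambda rows: [sum(col) / len(rows) for col in zip(*rows)]
    ref_profile, live_profile = avg(reference), avg(live)
    return sum(abs(a - b) for a, b in zip(ref_profile, live_profile)) / len(ref_profile)

# Made-up attribution vectors: one row per prediction, one column per feature.
reference_window = [[0.5, -0.3, 0.1], [0.4, -0.2, 0.0], [0.6, -0.4, 0.1]]
live_window      = [[0.1, -0.8, 0.3], [0.0, -0.7, 0.4]]

DRIFT_THRESHOLD = 0.2  # assumed tolerance; would need tuning per use case
drift = attribution_drift(reference_window, live_window)
if drift > DRIFT_THRESHOLD:
    print(f"explanation drift detected: {drift:.2f}")
```

A drift alert like this does not tell you the explanations are wrong, only that the model is now leaning on different features than it did at validation time, which is usually worth a human look.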
XAI is definitely a very exciting and important (and a little bit hyped) field. While researchers are still developing new approaches, applying various XAI methods in practice comes with its own challenges. One thing to keep in mind: The earlier in the project you start thinking about XAI, the easier it will be to incorporate it. However, when it comes to XAI,
There is no one-size-fits-all approach; it’s a process rather than a single product.
If you are interested in learning more about explainability, and about how to pick an approach that addresses different questions from your stakeholders, then join my talk at ODSC East 2021. I am looking forward to it!
Violeta Misheva, PhD, has been working as a data scientist in the financial industry for the past few years. Before that, she worked in consulting, which marked the start of her data science career. Violeta developed an interest in data and algorithms during her PhD studies, and that passion has only grown over time. She is passionate about responsible machine learning, especially topics such as explainability and fairness.