Lessons learned in Explainable AI
If you work in machine learning, you have most likely heard of Explainable AI (XAI). Stakeholders (businesses, regulators, patients, etc.) will usually not trust a black-box algorithm. They want an explanation for how the algorithm made its decisions. This is a hard problem, as ML algorithms cannot speak for themselves and tend to be incredibly complex. Usually, we data scientists will therefore use a tool like LIME or SHAP, create some charts and hope that our stakeholders are comfortable with the results.
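For reference, that workflow often amounts to only a few lines. The sketch below uses SHAP on synthetic data as a stand-in for a real project; the model and dataset are placeholders, not a recommendation.

```python
# Minimal sketch of the usual "explain and chart" workflow with SHAP,
# using synthetic data as a stand-in for a real project.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model)   # dispatches to a tree explainer for this model
shap_values = explainer(X[:200])    # attributions per sample and feature

# The chart that typically ends up in the stakeholder deck
shap.plots.beeswarm(shap_values)
```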
This article outlines some of the issues that XAI faces, not technical issues, but issues arising from the contexts in which XAI is used.
Explaining enough to be dangerous
1. XAI does not explain, it tells stories
The word “explanation” usually refers to a description of the causal pathways that lead to a certain outcome. In practice, almost all explanations are simplifications, omitting less relevant details in the service of brevity and clarity. XAI too is in the business of simplifications. If we wanted a complete account of how the model makes decisions, we could simply print out the entire model, with its thousands or millions of parameters. Since we already have access to the full model, XAI is really about simplifications that can be communicated, not full explanations. And the best way to simplify and communicate a complex model is with a story. All successful uses of XAI that I have seen allowed stakeholders to tell themselves a story. “Successful” refers to stakeholders liking the explanation, of course, not to them actually being better off because of it.
2. Stakeholders (and ML practitioners) frequently abuse XAI
A story by itself is not yet cause for grave concern, you might say. If it makes stakeholders more comfortable and gives them at least a rough idea of what is going on, it cannot be bad. What could possibly go wrong? Most, if not all, of your stakeholders are looking at your “explanation” in search of guidance, not understanding. Business stakeholders in particular live in a world of actions, and they will take actions based on your XAI plot before you can scream “correlation does not imply causation!”. This is especially true if the XAI story ties into other stories the stakeholders already have in mind. “Your model predicts that customers that have a pet are more profitable? Let’s offer discounts to pet owners to attract more. I got to catch a flight now. Bob, please prepare the pet campaign by Monday.”
For most business stakeholders, counterintuitive insights that lead to better actions are the reason they engaged with machine learning at all. XAI was not made to give advice to management, but a compelling story plus the “backed by big data” label forms a potent cocktail few executives can resist.
3. Following proper scientific methods is tough in XAI
When we use XAI, we face two problems at once: we need to capture the workings of our complex model, and the model needs to capture the even more complex reality. On top of that, we usually cannot run experiments in the real world, for cost or ethical reasons. We have no way of evaluating our story through hypothesis testing. This is a critical difference from the stories that regular science tells itself: science can test, XAI cannot.
Without a way to falsify XAI explanations, acceptance of XAI becomes a matter of belief. XAI is a kind of cargo cult science. If this sounds harsh, you may find comfort in the fact that almost all sciences dealing with complex, dynamic systems in which testing is hard face similar issues: biomedical sciences, psychology and economics all grapple with this problem.
Towards a more honest XAI
How can we circumvent the dangers of XAI? There is no silver bullet, but there are viable approaches:
1. Arrive with a healthy dose of skepticism
As experts, we need to be extra skeptical of the outputs our methods produce. Non-expert stakeholders might be easily fooled; it is our job to protect them from the narrative fallacy!
2. Check for understanding
A good way to find out if stakeholders really gain insight from an explanation is to run a few user testing sessions with them. A good example is to ask for “forward predictions”: give a stakeholder a sample and the explanations generated by your XAI method, then ask what the model predicted. The more accurate the stakeholders’ guesses, the better they can follow the model.
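Scoring such a session can be as simple as the sketch below; the prediction and guess arrays are hypothetical stand-ins for what you would actually collect from stakeholders.

```python
# Sketch of scoring a "forward prediction" test. The guesses would come from
# a stakeholder session; the arrays here are hypothetical placeholders.
import numpy as np

model_predictions   = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # what the model actually output
stakeholder_guesses = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # what stakeholders thought it output

forward_accuracy = (model_predictions == stakeholder_guesses).mean()
print(f"Forward-prediction accuracy: {forward_accuracy:.0%}")
```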
3. Test joint performance
A second step is to do joint human-machine performance testing. Here you provide the human with the sample, model prediction and XAI explanation, and then ask them to make a decision (e.g. to predict the real outcome). You can now test how well the human performed together with the machine. If the predictions and explanations are good, the human will be able to use them and make good decisions. If the explanations are poor, the human might get distracted by them and make worse decisions than the machine alone. If the predictions are poor, the human will not trust the machine at all and performance will be poor as well.
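A minimal sketch of such a comparison follows. The ground truth, machine predictions and human-plus-machine decisions are hypothetical placeholders for what you would record in a study.

```python
# Sketch of a joint human-machine performance comparison.
import numpy as np

ground_truth       = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # real outcomes
machine_only       = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # model predictions alone
human_plus_machine = np.array([1, 0, 1, 1, 1, 1, 0, 1])  # human decisions given prediction + explanation

machine_acc = (machine_only == ground_truth).mean()
joint_acc   = (human_plus_machine == ground_truth).mean()
print(f"machine alone: {machine_acc:.0%}, human + machine: {joint_acc:.0%}")
# If joint accuracy drops below machine-alone accuracy, the explanations
# may be distracting rather than helping.
```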
4. Clearly separate prediction from causal inference tasks
Predictive performance is one of the evaluation criteria of a scientific theory. However, when structuring ML projects, it should be clear from the get-go whether you are after predictions only or whether you plan to do some inference as well. If there is even a slight possibility that someone will use your XAI outputs for decision making, either provide a framework mapping XAI outputs to possible decisions (if experimentation cost is low) or push back strongly and show why XAI is not a basis for decision making.
5. Use simple models
XAI is hard because it tries to distill a big, complex model into a simple narrative. Using a simpler model in the first place greatly aids the task. Complexity is not only driven by the choice of model, however: a linear regression with 1 million inputs is harder to interpret than a neural network with only 4 inputs.
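As an illustration of the "simple model first" idea, a small logistic regression on a handful of features can often be read off directly from its coefficients, with no separate explanation tool required. The data and feature names below are synthetic and hypothetical.

```python
# Sketch: a small, directly interpretable model on a handful of features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["tenure", "n_purchases", "support_tickets", "has_pet"]  # hypothetical names

pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefs = pipe.named_steps["logisticregression"].coef_[0]

for name, coef in zip(feature_names, coefs):
    # Sign and magnitude are directly readable (on standardized inputs)
    print(f"{name:>16}: {coef:+.2f}")
```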
6. Make it easy to appeal model decisions
As part of being generally skeptical, you should be prepared for the case where your model gets it wrong and you cannot explain a decision. Make it easy for people to appeal decisions concerning them. Assume the model, and not the people, to be guilty. Provide ways for the model to train on the cases that it got wrong.
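One possible shape for that feedback loop is sketched below. The function names and in-memory storage are hypothetical; a real system would persist appeals and audit the corrected labels before retraining.

```python
# Sketch of an appeals loop: appealed cases with corrected labels are
# collected and folded back into the training data at the next refit.
import numpy as np

appealed_samples = []   # feature vectors of decisions that were appealed
corrected_labels = []   # outcomes established during the appeal review

def record_appeal(x, true_label):
    """Store an appealed case so the model can learn from it later."""
    appealed_samples.append(x)
    corrected_labels.append(true_label)

def retrain_with_appeals(model, X_train, y_train):
    """Refit the model with the appealed cases added to the training data."""
    if not appealed_samples:
        return model.fit(X_train, y_train)
    X_new = np.vstack([X_train, np.array(appealed_samples)])
    y_new = np.concatenate([y_train, np.array(corrected_labels)])
    return model.fit(X_new, y_new)
```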
Parting words
Getting into the habit of being skeptical and investigative about XAI takes some getting used to. One mental bridge is to imagine a “skeptic hat” which you put on every now and then. The hat makes you see things from a different perspective and allows you to question your work without getting worn down by doubting all day.
I hope this post has made you a little bit more skeptical and aware of potential dangers. Happy doubting!