The world is rapidly adopting machine learning solutions in ever more sectors, including some heavily regulated ones. At the same time, concern about using algorithms in settings where they could have harmful effects is growing, and regulators have started taking a closer look at how companies and governments apply machine learning solutions and whether they could harm their users.
A particular concern is often that these algorithms are ‘black box’ solutions that make it impossible for anyone to understand how they really work and how predictions are generated. However, in recent years a number of new explainable AI methods such as SHAP (Lundberg et al, 2017) have allowed us to explore and better understand the inner workings of these algorithms and their individual predictions, so these models are not really black boxes anymore.
Still, the technical know-how and manual input needed to generate these explanations form a barrier to making them accessible not only to data scientists, but also to other stakeholders, management, staff, external regulators and, finally, customers.
Management and other stakeholders ultimately need to decide whether to go forward with a proposed machine learning solution, and to do so they need to feel confident about what is in the model and how predictions are generated. External regulators need to be able to assess whether the model violates any local or European regulations, such as the GDPR. Under the GDPR, consumers have a right to an explanation whenever an automated decision affects them in some material way. Finally, many real-world machine learning deployments involve human-in-the-loop solutions, where a human decision-maker can choose to overrule the algorithm. In such cases, this person needs to understand how the model works in order to judge when it should be overruled.
With the explainerdashboard library that I developed over the past year, it is easy to build rich, interactive dashboards that allow even non-technical stakeholders to explore the workings and predictions of machine learning models.
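As a minimal sketch of what that looks like, assuming a fitted scikit-learn classifier called model and a hold-out test set X_test, y_test (like the ones used in the example further below), a first dashboard can be spun up in just a few lines:

from explainerdashboard import ClassifierExplainer, ExplainerDashboard

# wrap the fitted model and test data in an explainer object,
# then serve an interactive dashboard in the browser
explainer = ClassifierExplainer(model, X_test, y_test)
ExplainerDashboard(explainer).run()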
Introducing SHAP
With the introduction of SHapley Additive exPlanations (SHAP) (Lundberg et al, 2017; 2020) it became possible to answer the following question for each input feature of each prediction of almost any machine learning algorithm: “What was the contribution of this feature to the final prediction of the model?” This allows us to calculate the contribution of each feature (column) to the final prediction for each row of data (e.g. a specific customer). The contributions of the features are guaranteed to add up to the final prediction.
Even though the implementation details differ for the various algorithms (e.g. scikit-learn RandomForests, xgboost, lightgbm, PyTorch, TensorFlow, etc.), with the shap library generating SHAP values is now really straightforward for a data scientist:
from sklearn.ensemble import RandomForestClassifier
import shap

# fit a classifier on the training data
model = RandomForestClassifier().fit(X_train, y_train)

# compute SHAP values for the test set; [1] selects the values
# for the positive class (survival)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)[1]
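Since the SHAP values are additive, we can sanity-check this property: the explainer's expected (base) value plus the sum of a passenger's SHAP values should reproduce the model's prediction. A quick sketch using the objects defined above (for a scikit-learn random forest the SHAP values are in probability space; this may differ for other model types or explainer settings):

import numpy as np

# base value plus the first passenger's feature contributions...
reconstructed = explainer.expected_value[1] + shap_values[0].sum()
# ...should match the model's predicted probability of survival
predicted = model.predict_proba(X_test.iloc[[0]])[0, 1]
assert np.isclose(reconstructed, predicted)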
In this example we use the canonical Kaggle Titanic dataset to predict the likelihood of particular passengers surviving the sinking of the Titanic. We can now see what drove the prediction for the first passenger in our test set. In this case, the fact that the passenger was a male with a third-class ticket, and that the ticket fare was low, seems to have contributed to a low predicted chance of survival:
shap.initjs()

# show how each feature pushed this passenger's prediction away from the base value
shap.force_plot(explainer.expected_value[1], shap_values[0,:], X_test.iloc[0,:])
We can also look at the general effect of a feature such as ‘Age’ on the predictions in our data set. Here we see that older passengers were generally predicted to have a lower chance of surviving:
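A plot along these lines can be generated with shap's dependence plot, which scatters each passenger's ‘Age’ against its SHAP value (a sketch; the exact figure may have been produced with different settings):

shap.dependence_plot("Age", shap_values, X_test)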