Explainable machine learning is a sub-discipline of artificial intelligence (AI) and machine learning that attempts to summarize how machine learning systems make decisions. Such summaries can be helpful for many reasons: finding data-driven insights, uncovering problems in machine learning systems, facilitating regulatory compliance, and enabling users to appeal — or operators to override — inevitable wrong decisions.
Of course, all of that sounds great, but explainable machine learning is not yet a perfect science. In reality, there are two major issues to keep in mind:
1. Some “black-box” machine learning systems are probably just too complex to be accurately summarized.
2. Even for machine learning systems that are designed to be interpretable, the way summary information is presented is often still too complicated for business audiences. (Figure 1 provides an example of machine learning explanations aimed at data scientists.)
For issue 1, I’m going to assume that you want to use one of the many kinds of accurate and interpretable “glass-box” machine learning models available today, like monotonic gradient boosting machines in the open source frameworks h2o-3, LightGBM, and XGBoost. This article focuses on issue 2: helping you communicate explainable machine learning results clearly to business decision-makers.