
Explainability in AI

Updated: Mar 3

What is explainability in AI?


“Thirty-seven percent of organizations have implemented AI in some form. That’s a 270% increase over the last four years.” (Gartner, 2019) As AI becomes ubiquitous in the business world as an aid to decision making, the concept of “explainability” is becoming much more important. Explainability in AI is the ability to understand the process inside the “black box” and discern how the AI arrives at its decisions. Increased explainability allows for greater trust in AI decisions, easier troubleshooting, and the elimination of bias.


Explainability in AI can be achieved with a variety of frameworks, which we detail throughout this article.


Why is explainability important?


Increased Trust


Having explainability in your models is important, especially when those models are being used by non-technical users. Business leaders may be hesitant to trust the results of an AI model when they do not understand the decision criteria it is built on. Utilizing an explainability framework makes it easier for business leaders to adopt your models, because they can understand the reasoning behind the AI’s decisions.


Easier troubleshooting and improvement


If you understand your model and its decision making process, it is easier to fix problems when something goes wrong, and much easier to improve the model so it makes better predictions.


Understanding and eliminating bias


Having clarity into the “black box” of AI allows data science professionals to understand how bias may creep into models. Explainability in your AI models will allow your organization to eliminate bias before it happens, avoiding reputational and monetary losses.


For example, Amazon, in pursuit of automating its recruitment process, created a machine learning algorithm to help choose top candidates for positions at the organization. The program was trained by analyzing the resumes of applicants who had applied to the company over a ten-year period and identifying which of them had been successful. What Amazon did not anticipate was that most of the applicants in its data set were male, reflecting male dominance in the tech industry. The program quickly learned to prioritize male candidates over female candidates.


If Amazon had pursued an explainable AI approach, it could have caught the bias in its model before deployment and avoided an embarrassing outcome.


Explainability Frameworks


SHAP Values


SHAP (SHapley Additive exPlanations) values explain the impact each variable has on the AI’s output, scoring each variable’s influence in an easy-to-read table format. Plotted together, SHAP values create an easily interpretable visualization of an AI model’s behaviour.
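
As a rough sketch of how this looks in practice (assuming Python with the shap and scikit-learn libraries, a tree-based model, and scikit-learn’s built-in diabetes dataset standing in for your own data), the following computes SHAP values and plots a feature-importance summary:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Small built-in tabular dataset, used here only as a stand-in for real data
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer is SHAP's efficient implementation for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one score per (row, feature) pair

# Summary plot ranks features by the magnitude of their influence
shap.summary_plot(shap_values, X)

Each point in the resulting plot is one prediction, so you can see both how strongly and in which direction each variable pushes the model’s output.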


Saliency Maps

Saliency maps are usually used for explainability in image-based models. They colour each pixel according to its importance, highlighting which areas of the image matter most to the AI’s classification.
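
A minimal sketch of a gradient-based saliency map, assuming PyTorch and a recent torchvision with a pretrained classifier (the file name cat.jpg is a placeholder for your own image):

import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # placeholder image path
img.requires_grad_(True)

# Gradient of the winning class score with respect to the input pixels
scores = model(img)
scores[0, scores.argmax()].backward()

# Saliency = per-pixel gradient magnitude, taking the max over colour channels
saliency = img.grad.abs().max(dim=1)[0].squeeze()  # 224 x 224 importance map

Plotting this saliency map as a heatmap over the original image shows which regions the classifier relied on most.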


LIME (Local Interpretable Model-agnostic Explanations)


The LIME framework is another commonly used framework in explainable AI. As the name states, it is model-agnostic, so it can be applied to any AI model. LIME creates artificial data points that are similar to the data used in the model but with small changes, and observes how the model’s output shifts in response, revealing the impact each attribute has on the result. Using LIME will help you understand which features of your data set have the most impact on the model’s predictions.
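
A minimal sketch for tabular data, assuming Python with the lime and scikit-learn libraries and scikit-learn’s built-in iris dataset:

import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction: which features pushed the model toward its answer?
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())

The printed list pairs each feature with a weight, showing how much it contributed to the prediction for that one instance.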


Explainability with Neural Networks


Within the realm of AI, neural networks are the most difficult to explain. Neural networks are modelled on the human brain and are able to perform deep learning, often on unstructured data and with little human guidance about which features matter. Because neural networks have so many layers and are extremely complex, just like the human brain, it is extremely difficult to understand their decision making process. Below are two explainability frameworks specifically for deep learning with neural networks.


Activation Atlases


Created by Google and OpenAI, activation atlases allow visualization of a neural network and how its nodes interconnect and interact. Commonly used for visualizing image-classification networks, an activation atlas in essence asks a neural network to run “backwards”, displaying which inputs trigger certain outputs. The atlas demonstrates in a visual manner which features would push the network toward a particular output. Activation atlases are a fairly new development, but they are increasing explainability in image-based deep neural networks.
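
A real activation atlas aggregates activations from many images and lays out their visualizations on a 2D grid, which is more than a short snippet can show. As a much-simplified sketch of the underlying “running backwards” idea only (assuming PyTorch and a pretrained torchvision classifier; the target class index 207 is arbitrary), the code below optimizes a random image to maximize one output:

import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    score = model(img)[0, 207]  # activation of one arbitrary output class
    (-score).backward()         # gradient ascent on that activation
    optimizer.step()

# img now roughly shows what kind of input most strongly triggers that output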


Neural Backed Decision Trees


Neural backed decision trees apply the familiar concept of a decision tree to neural networks. In a neural backed decision tree, each node represents a neural network. This gives a high level overview of the decision process and allows some understanding of the “black box” without needing to untangle the extremely complex inner workings of each individual network.
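
As a toy illustration of the hierarchical idea only (not the published neural-backed decision tree method), the sketch below, assuming PyTorch and using untrained placeholder networks with a made-up digit/letter split, shows how a prediction can be read off as a path through a small tree of networks:

import torch
import torch.nn as nn

# Each tree node is its own (here untrained, placeholder) neural network
root = nn.Linear(784, 2)      # coarse choice: "digit-like" vs "letter-like"
digits = nn.Linear(784, 10)   # fine-grained choice among digits
letters = nn.Linear(784, 26)  # fine-grained choice among letters

def predict(x):
    # The decision is a readable path through the tree, not one opaque score vector
    if root(x).argmax(dim=1).item() == 0:
        return "digit", digits(x).argmax(dim=1).item()
    return "letter", letters(x).argmax(dim=1).item()

print(predict(torch.randn(1, 784)))  # random input, just to show the flow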
