Using LIME for image classification explainability (part 1/3)

This is the first part in a series of blog posts about my thesis project at BrainCreators. In this introductory part, I will cover some basic concepts regarding AI explainability and the LIME algorithm, give a short introduction to the Python LIME library, and discuss some possible modifications to it.

In the next part, I will show how we can move from producing an “explanation” to producing a “justification” using LIME, and discuss some possible uses this might have.

Finally (and hopefully…), after completing my thesis project, I will write a third part to show how this concept can be incorporated in practice for a better and more beneficial human-AI dialogue.

Introduction: What is AI Explainability?

As AI’s influence extends to more and more aspects of life, recent years have seen a growing need for “Explainable AI”, i.e. AI which not only performs well on given tasks, but is also interpretable by humans (and not necessarily AI experts).

Think, for example, of a doctor who uses an AI algorithm which classifies tumors as “malignant” or “benign”. Having a “black box” algorithm which merely outputs one of the classes is simply not enough for the doctor to make important health-care decisions. An algorithm which also outputs an explanation for such a classification (e.g. “This tumor is benign because …”) is much better suited for incorporation in important decision-making processes which inherently also involve a human in the loop.

This argument goes well beyond the medical industry. In fact, the EU’s General Data Protection Regulation (GDPR), which went into effect this May, also adopts (to a certain extent) a person’s “right to explanation”, meaning that one “should have the right not to be subject to a decision … which is based solely on automated processing … without any human intervention” and that any such process “should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision” (see Recital 71).

The LIME algorithm and library

One of the prominent means of producing an “explanation” for an AI’s decision is the LIME algorithm.

LIME stands for “Local Interpretable Model-agnostic Explanations”, and the algorithm was first discussed in the 2016 paper “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier”.

In essence, the idea behind LIME is to find, for a given classification model and a given classification instance – be it an image, textual document, or table – a “simple” approximation which helps to “explain” how the complex model “behaves” at that certain point.

[Original figure from the LIME paper: finding a simple approximation for a complex model at a given point]

The Python LIME library (available here or by running “pip install lime”) implements this idea in practice for tabular, textual and image data.

There are several existing tutorials that show how to use the LIME library on image classifiers. The rest of this post will use the library to show how the “inside” of the algorithm works, and also give some helpful tips for integrating it with PyTorch and speeding it up.

LIME: from theory to practice

Now that we’ve covered the basic idea of LIME, let’s take a look at how the implementation works in practice (a Jupyter notebook with code used to generate these examples is available here).

As an example, we’ll use the following image, taken from the ILSVRC2014 dataset, and a pretrained ResNet classifier that was trained to classify images into different types of balls.

[Image: dog with ball]

The first thing the LIME implementation does is use one of the scikit-image library’s segmentation algorithms to segment the image. The default used by the LIME library is the quickshift clustering algorithm, which produces a good and granular segmentation of the image into “super pixels”, but is quite computationally heavy (we will come back to this point later).
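To make this step concrete, here is a minimal sketch of running the segmentation with scikit-image directly; the image path is a placeholder, and the quickshift parameters shown roughly mirror the library’s defaults.

```python
import numpy as np
from skimage.io import imread
from skimage.segmentation import quickshift, mark_boundaries

# Placeholder path – any RGB image will do.
image = imread("dog_with_ball.jpg")

# Quickshift clusters pixels by colour and spatial proximity into "super pixels";
# kernel_size / max_dist / ratio roughly match LIME's defaults.
segments = quickshift(image, kernel_size=4, max_dist=200, ratio=0.2)
print("number of super pixels:", len(np.unique(segments)))

# Visual sanity check: draw the segment boundaries on top of the image.
boundaries = mark_boundaries(image, segments)
```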

[Image: the same image segmented into super pixels]

In the next phase, LIME creates several variants of the original image (1,000 by default) – each with a different combination of “fudged” super pixels and original pixels.
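The snippet below is a rough, hand-rolled sketch of this perturbation step (not the library’s internal code): each variant keeps a random subset of super pixels and replaces the rest with a “fudged” value, here the mean colour of each segment.

```python
import numpy as np

num_samples = 1000                      # LIME's default number of variants
num_segments = int(segments.max()) + 1
rng = np.random.RandomState(0)

# Row i of this binary matrix says which super pixels variant i keeps.
keep_matrix = rng.randint(0, 2, size=(num_samples, num_segments))

# "Fudged" image: every segment replaced by its mean colour.
fudged = image.copy()
for s in range(num_segments):
    fudged[segments == s] = image[segments == s].mean(axis=0)

# Build each variant by mixing original and fudged segments.
variants = []
for row in keep_matrix:
    variant = image.copy()
    off = np.isin(segments, np.where(row == 0)[0])  # segments turned "off"
    variant[off] = fudged[off]
    variants.append(variant)
```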

[Images: two example variants, each with different super pixels fudged]

Then, for each variant, LIME does two things. First, it computes the variant’s distance from the original image (using sklearn’s cosine pairwise-distance method). Second, it uses the classifier (in our case – the pretrained ResNet) to classify the variant.
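Continuing the hand-rolled sketch, the distances – and the sample weights LIME derives from them – can be computed in the binary “which super pixels are on” space; the 0.25 kernel width shown here is, to my knowledge, the library’s default for images.

```python
import numpy as np
from sklearn.metrics import pairwise_distances

# The original image corresponds to the all-ones row (every super pixel kept).
original_row = np.ones((1, num_segments))
distances = pairwise_distances(keep_matrix, original_row, metric="cosine").ravel()

# Distances are turned into sample weights with an exponential kernel.
kernel_width = 0.25
weights = np.sqrt(np.exp(-(distances ** 2) / kernel_width ** 2))
```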

Having the classification for each of these variants (and their distances from the original image) finally enables LIME to fit a weighted linear model of the “importance” of each image segment for the eventual class.
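Under the hood this boils down to a weighted (Ridge) regression on the binary super-pixel matrix. The sketch below assumes probs holds the classifier’s “tennis ball” probability for each variant (obtained, for example, with the PyTorch wrapper shown further down).

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder: in practice probs[i] is the network's probability of
# "tennis ball" for variants[i].
probs = np.random.rand(num_samples)

# Weighted linear model: each coefficient is the "importance" of one super pixel.
linear_model = Ridge(alpha=1.0)
linear_model.fit(keep_matrix, probs, sample_weight=weights)

# Super pixels sorted from most to least supportive of the class.
top_segments = np.argsort(linear_model.coef_)[::-1]
```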

This model can be generated using LIME’s explain_instance method. After generating it, we can ask for the “top segment” or “top X segments” which explain the classification of this image as a tennis ball (corresponding to the highest coefficients in our linear model).
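In practice there is no need to roll any of this by hand; a typical call to the library looks roughly like the following, where predict_fn is any function mapping a batch of images to class probabilities (e.g. the PyTorch wrapper shown further down).

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,                     # numpy RGB image
    classifier_fn=predict_fn,  # batch of images -> class probabilities
    top_labels=5,
    num_samples=1000)

# Highlight the top 2 segments supporting the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=2,
    hide_rest=False)
overlay = mark_boundaries(temp, mask)
```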

[Images: the top segment(s) supporting the “tennis ball” classification, highlighted on the image]

Of course, we can also ask for the top “pros” (positive coefficients) and “cons” (negative coefficients) for classifying this image as a tennis ball (or any other class, for that matter).
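With the same explanation object, the pros and cons can be requested together by dropping positive_only (tennis_ball_idx being a hypothetical class index):

```python
# Include both supporting (positive-coefficient) and opposing
# (negative-coefficient) segments for the chosen class.
temp, mask = explanation.get_image_and_mask(
    tennis_ball_idx,
    positive_only=False,
    num_features=5,
    hide_rest=False)
```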

[Image: “pro” and “con” segments for the “tennis ball” class]

These segments, in turn, can be used to “explain” a classification made by a classifier to a human user.

Some final remarks regarding the LIME library

First, note that LIME treats the classifying model as a “black box”, and thus does not depend at all on the architecture of the original classifier (this is LIME’s model-agnostic attribute). This is a really cool feature that makes it possible to get an “explanation” for any classifier, and also to compare explanations given by models which use completely different methods.

While this is definitely the case, the current LIME library was designed to work with TensorFlow and Keras. Several minor changes had to be made to integrate it with PyTorch – namely converting some variables to tensors (I haven’t tested LIME with the latest PyTorch release – so perhaps these adjustments are no longer needed).
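For reference, here is a minimal sketch of such a wrapper that turns a pretrained PyTorch model into the kind of “batch of numpy images → class probabilities” function LIME expects; the model choice and preprocessing values are illustrative, not taken from the original project.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50(pretrained=True).to(device).eval()

# Standard ImageNet preprocessing; adjust to whatever the classifier expects.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_fn(images):
    """LIME passes a batch of HxWx3 numpy images; return class probabilities."""
    batch = torch.stack([preprocess(img) for img in images]).to(device)
    with torch.no_grad():
        logits = model(batch)
    return F.softmax(logits, dim=1).cpu().numpy()
```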

A second point worth mentioning is that producing this explanation can be quite computationally heavy. The original LIME tutorial cites 7.65 minutes for generating an explanation for a given image.

While this might be enough for several use cases, it was not sufficient for the needs of my thesis project. After conducting further performance analysis, it was obvious that the main bottleneck of the process was generating and classifying 1,000 variants of the original image. I was able to use several approaches to improve this:

1. Using some of the great hardware available here at BrainCreators, and integrating LIME with PyTorch (as discussed above) to utilize the GPU for classifying the different variants. This alone drove the explanation generation time down to around 10 seconds.

2. Using a less granular image segmentation algorithm (both k-means-based slic and Felzenszwalb’s efficient graph-based image segmentation seemed to work fine) was beneficial in two ways. Directly – conducting the image segmentation itself took less time; and indirectly – the number of possible segment combinations for every image was much smaller, making it possible to use fewer variants for classification while still producing a good linear model (see the sketch after this list). This drove the time needed for generating an explanation down to about 1 second. It should be noted that if multiple explanations of the same image are planned, it would probably also be a good idea to cache the image segmentation outcome altogether.
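A coarser segmentation can be plugged in through the library’s segmentation_fn argument; a sketch with illustrative parameter values:

```python
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

# Coarser SLIC segmentation: roughly 50 super pixels instead of hundreds.
coarse_segmenter = SegmentationAlgorithm("slic", n_segments=50,
                                         compactness=10, sigma=1)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn=predict_fn,
    segmentation_fn=coarse_segmenter,
    top_labels=5,
    num_samples=150)   # far fewer variants suffice with ~50 segments
```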

A third and final point is that LIME produces explanations in the form of image segments. As my project needed explanations in the form of bounding boxes (more on that in the next post), I also looked at several ways of converting these segments into bounding boxes. One way was to create a single bounding box that includes all segments in the LIME explanation; a second way was to put a bounding box around each “area” in the explanation (a sketch of both follows below).

For example, from these segments which represent the top 2 features explaining the classification of the image as containing a basketball:

[Image: basketball players, with the top 2 explaining segments highlighted]

It is possible to produce one big bounding box (shown at the top) or two bounding boxes (shown on the bottom):

[Image: a single bounding box covering both explaining segments (the knee area and the ball)]

[Images: two separate bounding boxes, one around each explaining segment]
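Both conversions are straightforward once the binary mask of the explaining segments is available; here is a sketch of the two options, where mask is the output of get_image_and_mask shown earlier.

```python
import numpy as np
from skimage.measure import label, regionprops

# (a) One bounding box around everything in the explanation.
rows, cols = np.where(mask > 0)
single_box = (cols.min(), rows.min(), cols.max(), rows.max())  # x0, y0, x1, y1

# (b) One bounding box per connected "area" in the explanation.
multi_boxes = []
for region in regionprops(label(mask > 0)):
    y0, x0, y1, x1 = region.bbox  # (min_row, min_col, max_row, max_col)
    multi_boxes.append((x0, y0, x1, y1))
```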

The code for the adjusted LIME implementation, with all aforementioned changes, can be found here.

This concludes the introductory post about AI explainability and LIME. As shown, LIME could be a very useful way of generating an “explanation” to help integrate an AI classifier within a bigger decision-making process with a “human in the loop”.

However, it requires tuning several hyper-parameters, inter alia the number of image segments to include in a given explanation. This tuning can vary depending on the complexity of the image and the classifier used on it, and it is quite hard to find a non-qualitative measure for how “good” an explanation is.

In the next post, I will discuss the concept of “justification” (built on the LIME “explanation”) and how to implement it for image classification.
