PyData SoCal: Explaining Black Box ML Predictions

PyData SoCal: Sameer Singh explains how a classifier that appears to be accurate may not be.

Tonight I joined the first Southern California PyData meetup. It featured two speakers discussing how to better understand the predictions made by machine-learning models, and why it might be important to do so. I was impressed by the packages demonstrated, and by how important such capabilities are likely to become as we move forward with deep-learning-based automation that could fail catastrophically in unexpected ways.

LIME: Local Interpretable Model-Agnostic Explanations

PyData SoCal: Sameer Singh wraps up by reiterating the importance of being able to interpret models’ outputs.

First, we heard from Dr. Sameer Singh, who discussed the LIME (Local Interpretable Model-Agnostic Explanations) package that he and his colleagues have developed. He started by noting that the prevalence of “black box” models is increasingly a problem. The issues include regulatory ones (for example, medical authorities refusing to accept diagnoses that cannot be explained), management reluctance to depend on tools that cannot be understood, and vulnerability to what he called “stupid mistakes,” or more accurately, models behaving in ways that could not be anticipated. As an example, he showed a set of photos of wolves and huskies and a classifier that apparently did a pretty good job of differentiating between the two; in fact, it generally performed quite well. But as he also demonstrated, once we understood how the classifier was actually working, it was clear that the positive results were an artifact of the specific photos used: photos containing snow were classified as wolves, while snow-less photos were deemed to be huskies. The complex neural network that seemed to do so well at telling apart two very similar-looking animals was nothing but a snow detector!

LIME will show you which areas of a photo the classifier is focusing on. In text analysis, it shows which words carry the most weight in the prediction. For tabular data, it will tell you which features matter most.
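To make that concrete, here is a minimal, self-contained sketch of the kind of explanation LIME produces for a text classifier. The tiny corpus, labels, and class names are made up purely for illustration; the explainer calls follow LIME’s documented text interface.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny made-up corpus: 1 = positive review, 0 = negative review.
texts = [
    "great movie, loved the acting",
    "wonderful film with a great story",
    "terrible plot and awful acting",
    "boring, awful waste of time",
]
labels = [1, 1, 0, 0]

# Any function that maps a list of raw strings to class probabilities will do;
# a scikit-learn pipeline's predict_proba is a convenient example.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# LIME perturbs the input text (dropping words at random), watches how the
# prediction changes, and fits a simple weighted linear model around that one
# example to estimate each word's influence.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "a great story but awful acting",
    pipeline.predict_proba,
    num_features=4,
)
print(explanation.as_list())  # [(word, weight), ...] for this one prediction
```

The output is a list of word-and-weight pairs for that single prediction, which is exactly the kind of evidence that exposed the “snow detector” in the husky-versus-wolf example.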

The LIME project is available at https://github.com/marcotcr/lime, and an introductory video is also available online.

Skater: Interpretation of Predictive Models

PyData SoCal: Pramit Choudhary presents the open-source Skater package.

Then we heard from Pramit Choudhary, a Lead Data Scientist at DataScience Inc., who discussed one of their open-source projects called Skater. It builds on the capabilities in LIME to offer a relatively simple package in Python for those who need to be able to explain the internal decisions of predictive models in user-friendly terms.

Skater is designed to be used both before and after a model is deployed in production, making it useful for all phases of model development, deployment, use and ongoing improvement. It’s available at https://github.com/datascienceinc/Skater and is currently operating as an open-source project seeking contributors.
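For a sense of what that looks like in practice, here is a rough sketch of Skater-style model inspection. The module paths and call names below reflect the project’s README at the time and are recalled from memory, so treat them as assumptions rather than a definitive recipe; the dataset and model are just stand-ins.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Skater imports as shown in the project's README at the time; paths may have
# changed in later releases.
from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

# Stand-in data and model, just to have something trained to inspect.
data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

# Wrap the data the model will be probed with...
interpreter = Interpretation(data.data, feature_names=data.feature_names)
# ...and the trained model itself, exposed only through its prediction function,
# so the same inspection works on any "black box."
model = InMemoryModel(clf.predict_proba, examples=data.data)

# Model-agnostic feature importance: which inputs the model actually relies on,
# regardless of how it works internally.
print(interpreter.feature_importance.feature_importance(model))
```

Because the wrapped model is only ever called through its prediction function, the same inspection can be run against a model in development or one already serving predictions in production.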

Pramit’s presentation was necessarily shorter than Sameer’s, but I found it very helpful for understanding how and where I might deploy such a tool. It immediately occurred to me that Skater (or any similar package, and I can only imagine there will eventually be others) could be useful in assessing models so as to avoid rare but potentially catastrophic missed predictions, sometimes known as “black swans.” In my experience, such failures often happen when models exclude important factors or depend on irrelevant noise. Recognizing which factors are and are not included in a model could help us create better ones. (Sadly, these tools won’t prevent the kind of actual malice that occurs when financial incentives to ignore risk are irresistible!)

I also thought it was likely to be useful in an area mentioned in the first presentation: diagnosis and treatment recommendations for medical issues. I’m experienced with health IT, have worked frequently with medical data in the past, and did my capstone project for last summer’s data science program using a diabetes dataset, so I have some notion of how this can work and what the pitfalls are. One of the difficulties of dealing with such data is the complexity of both the models and the data available, and this will only get harder as models expand to include not only categorical data but imaging, sound, and even video. From both an ethical and a legal standpoint, relying on highly complex “black box” diagnoses is problematic, but a “black box with explanation” is a lot more useful, especially when the explanation can be verified in the real world. Skater allows for that.

This kind of interpretation will also be essential for the development of self-driving cars and other potentially hazardous automated devices. Evidence suggests self-driving vehicles are already safer than human drivers (not perfectly safe, but safer), but it is unlikely that widespread acceptance will happen until we can explain what these algorithms are really doing and why they behave the way they do. Packages like Skater will help.

All in all, a great start for PyData SoCal. Los Angeles has been one of the few major cities without a chapter, and it’s good to see there finally is one.

Thanks also to DataScience Inc., which hosts this event as well as many of the other data science and related events in Los Angeles.