The Ethics of Artificial Intelligence, Part 4: Interpretability


Over the last three weeks I’ve talked about key components of bringing ethics into data science, including transparency, trust and ground truth. Today I want to conclude my four-part series on the ethics of artificial intelligence by talking about the interpretability of machine learning algorithms. In particular, I want to bring up a few points from a Fast Forward Labs webinar on this topic that I attended on 5 September.

If you’ve been following the news in the world of machine learning within the last year, you’ve probably heard about the many ways these algorithms are being used to make decisions. Machine learning algorithms can determine whether someone is susceptible to a heart attack, predict whether a tornado will strike a town, forecast elections, decide who gets approved for a mortgage application, select the best job candidate and even drive cars. These are just a few applications of big data, so it’s important to ask hard questions about what’s going on inside the algorithms that make these decisions. In order to begin to trust the outcomes and predictions of machine learning and implement these solutions across more domains, the algorithms need to be interpretable by everyone.

For those of you not familiar with machine learning, you can learn more from my YouTube series “Data Science in 90 seconds” or read a good working definition of it here. Basically, a computer learns to produce some type of output from some type of inputs without being explicitly programmed; the inputs go into a black box and a result comes out. Knowing what goes into this black box, what happens inside it, and how that relates to the output is the topic of today’s conversation.
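To make the black-box idea concrete, here is a minimal sketch in Python. The model, dataset and library (scikit-learn) are my own choices for illustration: the point is simply that the mapping from inputs to outputs is learned from data, and nothing inside the fitted object reads like a set of human-written rules.

```python
# A minimal black-box sketch: inputs go in, a prediction comes out,
# and the learned mapping is not directly human-readable.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)

# No hand-written rules: the model infers the input-output relationship from data.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# We can see what goes in and what comes out...
print(black_box.predict(X[:3]))        # predicted classes for three records
print(black_box.predict_proba(X[:3]))  # class probabilities for the same records

# ...but the "inside" is a stack of boosted trees, not an explanation.
print(len(black_box.estimators_), "boosted stages")
```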


A great O’Reilly article by Patrick Hall and colleagues talks about two types of machine learning interpretability: global and local. Interpretability is our ability to understand the steps the computer takes to arrive at its decision. Global interpretability means understanding how the machine learning model works as a whole, based on its inputs and the relationship between those inputs and its results. Local interpretability, on the other hand, ‘promote[s] understanding of small regions of the conditional distribution, such as clusters of input records and their corresponding predictions.’ Ideally, to inspire trust, we need both global and local interpretability.
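Here is a rough sketch of that distinction (my own illustration, not an example from the article or the webinar), using a plain logistic regression: the fitted coefficients give a global picture of how each input pushes predictions across the whole dataset, while multiplying one record’s feature values by those coefficients gives a local picture of why that particular record got its score.

```python
# Global vs. local interpretability, sketched with a plain logistic regression.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_std = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X_std, data.target)

# Global view: one coefficient per feature describes the model's overall behaviour.
global_weights = dict(zip(data.feature_names, model.coef_[0]))
top_global = sorted(global_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]
print("Global (largest coefficients):", top_global)

# Local view: coefficient * (standardized) feature value for one record shows
# which inputs pushed this particular prediction up or down.
local_contrib = dict(zip(data.feature_names, model.coef_[0] * X_std[0]))
top_local = sorted(local_contrib.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]
print("Local (largest contributions for record 0):", top_local)
```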

There are (at least) two ways to ensure that machine learning algorithms are interpretable: choose algorithms that are by design easier to interpret, such as logistic regression and decision trees, or apply Local Interpretable Model-Agnostic Explanations, otherwise known as LIME, after the model has been trained. Examples of how LIME works were discussed in the webinar I mentioned earlier in this blog. Basically, LIME approximates how an algorithm is working by representing it with an interpretable linear model. The approximation explains how a particular algorithm behaves locally, around an individual prediction, for example when processing text or images.
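As a rough sketch of how this looks in practice (using the open-source `lime` package together with a random forest and scikit-learn’s breast cancer dataset, all my own choices for illustration rather than the webinar’s examples): LIME perturbs one record, watches how the black-box predictions change, and fits a small weighted linear surrogate whose terms serve as the local explanation.

```python
# Local explanation of one prediction from a black-box model, using LIME.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

data = load_breast_cancer()
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    data.data, data.target
)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction: LIME samples points around this record, queries the
# black box, and fits an interpretable linear model to those nearby predictions.
explanation = explainer.explain_instance(
    data.data[0], black_box.predict_proba, num_features=5
)
print(explanation.as_list())  # top features and their local weights
```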

So after four weeks of discussion on ethics in artificial intelligence, we’ve come to a good pause in the conversation. Let me know if you would like me to discuss other aspects of ethics or if you find other interesting research in this field.
