Google Cloud

Opening up the Black Box

Enabling interpretation of AI models

In the quest for more accurate AI, the availability of computing resources coupled with growing dataset sizes has fueled a trend towards more complex, non-linear models. These models allow developers to uncover correlations hidden in vast volumes of data, yielding insights that can give organizations a significant edge over their competitors. Unfortunately, these complex models have also become increasingly opaque. This article introduces how your organization can build interpretable and inclusive ML models using Explainable AI.

Building Better Models
The usefulness and fairness of AI systems depend largely on data scientists' ability to explain, understand, and control them. Explainable AI lets users understand feature attributions on AI Platform and AutoML, and visually inspect model behavior using the What-If Tool. These tools help data scientists design models with minimal bias and drift, so your developers can build models with high precision and recall. In addition, models that are explainable and transparent help end users build trust in machine learning.

Start Interpreting your Models with Feature Attribution
Both AutoML Tables and AI Platform can show developers how important each feature was to a model's predictions. This helps developers identify and address bias as well as unbalanced or unnormalized datasets. Both solutions can also show feature attributions for a single sample.

For example, suppose a deep neural network is trained to predict the duration of a bike ride based on weather data and previous ride-sharing data. If you request only predictions from this model, you get the predicted duration of each bike ride in minutes. If you request explanations, you get the predicted trip duration along with an attribution score for each feature in your explanations request. The attribution scores show how much each feature affected the prediction relative to the baseline value that you specify. Choose a baseline that is meaningful for your model - in this case, the median bike ride duration. You can plot the feature attribution scores to see which features contributed most strongly to the resulting prediction:

[Figure: feature attribution scores for a bike trip duration prediction]
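To make this concrete, here is a minimal sketch of plotting attribution scores for a single prediction. It assumes the scores have already been extracted from the explanations response (for example, one returned by `gcloud ai-platform explain`); the feature names and values are purely illustrative.

```python
# A minimal sketch: plot feature attributions for one prediction.
# Assumption: `attributions` has already been extracted from the response of an
# explanations request; the feature names and scores below are hypothetical.
import matplotlib.pyplot as plt

# Hypothetical attribution scores relative to the baseline (median ride duration).
attributions = {
    "distance_km": 7.2,
    "start_hour": 1.4,
    "temperature_c": -0.8,
    "precipitation_mm": 2.1,
    "is_weekend": -1.6,
}

# Sort features by absolute impact so the strongest contributors appear on top.
features, scores = zip(*sorted(attributions.items(), key=lambda kv: abs(kv[1])))

plt.barh(features, scores)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Attribution (minutes relative to baseline)")
plt.title("Feature attributions for one bike-trip prediction")
plt.tight_layout()
plt.show()
```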

Moreover, AI Platform can explain why an image was classified the way it was - for example, helping developers understand why an image was misclassified so they can take corrective steps. Suppose an image classification model is trained to predict the species of the flower in an image. If you request predictions from this model on a new set of images, you receive a prediction for each image ("daisy" or "dandelion"). If you request explanations, you get the predicted class along with an overlay for the image, showing which areas of the image contributed most strongly to the resulting prediction:

[Figure: attribution overlay highlighting the image regions behind a flower classification]
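As a rough illustration, the sketch below overlays a per-pixel attribution mask on the original image. It assumes the mask has already been extracted from the explanations response into a 2-D array matching the image size; the file names are placeholders.

```python
# A minimal sketch: overlay an attribution mask on the classified image.
# Assumption: `attribution_mask` is a 2-D array of per-pixel attribution values
# recovered from the explanations response; both file paths are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

image = np.asarray(Image.open("flower.jpg"))          # the image that was classified
attribution_mask = np.load("attribution_mask.npy")    # hypothetical per-pixel attributions

plt.imshow(image)
plt.imshow(attribution_mask, cmap="viridis", alpha=0.5)  # semi-transparent overlay
plt.axis("off")
plt.title('Regions supporting the "dandelion" prediction')
plt.show()
```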

Let your eyes do the work
The What-If Tool makes it easy to efficiently and intuitively explore the performance of up to two models on a dataset. Investigate model performance across ranges of features in your dataset, different optimization strategies, and even manipulations of individual data point values - all in a visual way that requires minimal code.

Data scientists can keep working in their environment of choice, thanks to compatibility with TensorBoard, AI Platform, and Jupyter and Colaboratory notebooks.
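For instance, here is a minimal sketch of launching the What-If Tool inside a Jupyter or Colab notebook with the witwidget package. The `examples` list and `predict_fn` function are placeholders for your own data and model.

```python
# A minimal sketch: render the What-If Tool in a notebook cell.
# Assumption: `examples` is a list of tf.train.Example protos and `predict_fn`
# returns model predictions for a list of examples; both are placeholders.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

config_builder = (
    WitConfigBuilder(examples)
    .set_custom_predict_fn(predict_fn)  # plug in any model via a predict function
)

# Renders the interactive What-If Tool widget in the notebook output cell.
WitWidget(config_builder, height=800)
```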

Start with the What-If Tool tutorial, watch the video below for a short introduction to the What-If Tool, or watch more detailed episodes here.

[Video: introduction to the What-If Tool]

Interested in how your organization can start using Explainable AI? Please click here to find more information in our Explainable AI Whitepaper.
