Google tells AI to explain itself

Google has added Explainable AI services to its cloud platform in an effort to make the decision-making processes of machine learning models more transparent to users, and thus build greater trust in the models themselves.

Announced on the Google Cloud Blog, the new capability is intended to improve the interpretability of machine learning models. But this is no easy task, as Google admits. Google Cloud AI Explanations takes the approach of quantifying each data factor’s contribution to a model’s output, helping the human user understand why the model made the decisions it did.

In other words, it is a far cry from an explanation in layman’s terms, and will only really make sense to the data scientists or developers who built the model in the first place.
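To give a rough sense of what this kind of feature attribution involves, the sketch below implements integrated gradients, one common attribution technique, over a toy linear model in plain Python/NumPy. It is a minimal illustration of the general idea only, not Google’s API or implementation, and every name in it is made up for the example.

```python
import numpy as np

def integrated_gradients(grad_fn, baseline, x, steps=50):
    """Approximate each feature's contribution to the change in model output
    between `baseline` and `x` by averaging gradients along the straight-line
    path from baseline to x (a standard attribution technique)."""
    total_grad = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        total_grad += grad_fn(point)
    avg_grad = total_grad / steps
    return (x - baseline) * avg_grad  # per-feature attribution

# Toy "model": a hand-written linear function with a known gradient.
weights = np.array([0.5, -2.0, 1.0])
model_fn = lambda x: float(weights @ x)
grad_fn = lambda x: weights  # gradient of a linear model is constant

x = np.array([1.0, 0.5, 2.0])
baseline = np.zeros(3)
attributions = integrated_gradients(grad_fn, baseline, x)

print(attributions)                                   # contribution of each input feature
print(attributions.sum(), model_fn(x) - model_fn(baseline))  # the attributions sum to the output change
```

In this toy case each attribution is simply the feature value times its weight, and the attributions add up exactly to the difference in model output, which is the kind of per-feature accounting the service surfaces to its users.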

The approach also has its limits: attributions depend on the model and on the data used to train it, according to Tracy Frey, Director of Strategy for Google Cloud AI.

“Any explanation method has limitations. For one, AI Explanations reflect the patterns the model found in the data, but they don’t reveal any fundamental relationships in your data sample, population, or application. We’re striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about its limitations,” she said.

Explainable AI is seen as an important goal, not only for giving developers feedback on whether machine learning models are making the right decisions, but also for making AI more acceptable to the senior executives within an organisation who are ultimately responsible for any decisions.

Google notes that while machine learning models can tease out correlations between enormous numbers of data points and deliver greater accuracy, the way they work can be opaque. Even inspecting a model’s structure or weights often reveals little about its behaviour, which means that for some decision makers, especially in highly regulated industries or those where confidence is critical, AI can be out of bounds without some kind of interpretability.

Along with AI Explanations, Google also released what it calls model cards as documentation for two features of its Cloud Vision API, Face Detection and Object Detection.

Model cards are documents detailing the performance characteristics of ready-trained machine learning models such as those just mentioned, and are intended to “provide practical information about models’ performance and limitations” in order to help developers make better decisions about what models to use and how to deploy them responsibly.

According to Google, model cards are based on an approach it proposed in a white paper it published earlier this year, Model Cards for Model Reporting.
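To give a sense of the kind of information a model card captures, the sketch below builds an illustrative card as a plain Python dictionary. The section names loosely follow those proposed in the Model Cards for Model Reporting paper; the model name, field values and numbers are invented placeholders, not Google’s published documentation.

```python
# Illustrative only: an example model card expressed as a plain data structure.
# Every value below is a made-up placeholder, not a published figure.
model_card = {
    "model_details": {
        "name": "example-object-detector",   # hypothetical model
        "version": "1.0",
        "type": "object detection",
    },
    "intended_use": "Detecting common objects in consumer photos; "
                    "not intended for surveillance or medical use.",
    "factors": ["lighting conditions", "object size", "image resolution"],
    "metrics": {"mean_average_precision": 0.62},  # placeholder number
    "evaluation_data": "Held-out sample of labelled images (hypothetical).",
    "training_data": "Proprietary labelled image corpus (hypothetical).",
    "limitations": [
        "Performance degrades on small or heavily occluded objects.",
        "Not evaluated on domains outside consumer photography.",
    ],
}

# A developer could render the card as human-readable documentation:
for section, content in model_card.items():
    print(f"{section}: {content}")
```

The point of the format is that performance figures, intended uses and known limitations sit alongside the model itself, so a developer deciding whether to adopt it can weigh those caveats up front.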