IBM’s answer to AI bias? Don’t leave spotting it to humans alone…

IBM has offered to save its Watson customers from self-inflicted foot wounds by automatically flagging up potential bias issues in their AI and machine-learning models.

Watson OpenScale is a service that monitors users’ AI and machine-learning deployments, tracking outcomes, checking models for explainability and compliance, and identifying and mitigating algorithmic bias.

That latter point is particularly tricky, with companies including Amazon being called out for pushing AI-based products accused of discriminating against particular groups.

IBM offering manager Susannah Shattuck wrote in a blog post yesterday: “Starting today, we are making it easier to detect and mitigate bias against protected attributes like sex and ethnicity with Watson OpenScale through recommended bias monitors.”

To date, she continued, users had been able to manually select “which features or attributes of a model to monitor for bias in production, based on their own knowledge”.

OpenScale will now feature “recommended bias monitors” which will “automatically identify whether known protected attributes, including sex, ethnicity, marital status, and age, are present in a model and recommend they be monitored.”

This will help prevent customers from overlooking such attributes, she said, and ensure that bias against them is tracked in production.

She added that IBM was working with compliance experts at its Promontory subsidiary to continue expanding the list of attributes “to cover the sensitive demographics attributes most commonly referenced in data regulation.”

She said that as well as detecting protected attributes, Watson OpenScale will “recommend which values within each attribute should be set as the monitored and the reference values”. So, for example, “within the ‘Sex’ attribute, the bias monitor [would] be configured such that ‘Woman’ and ‘Non-Binary’ are the monitored values, and ‘Male’ is the reference value.” Users will be able to edit the recommendations via the bias configuration pane.
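To make the monitored-versus-reference split concrete, the recommendation she describes might boil down to something like the following sketch. The structure and key names here are invented purely for illustration; they are not the actual Watson OpenScale API, and the 0.8 threshold is an assumption borrowed from the common “80 per cent” disparate-impact rule of thumb.

```python
# Hypothetical example only: not the Watson OpenScale API.
# It simply illustrates a recommended bias monitor for the "Sex" attribute,
# with monitored values compared against a reference value.
recommended_bias_monitor = {
    "attribute": "Sex",
    "monitored_values": ["Woman", "Non-Binary"],  # groups checked for unfavourable outcomes
    "reference_value": "Male",                    # baseline group for comparison
    "fairness_threshold": 0.8,                    # assumed 80% disparate-impact rule of thumb
}
```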

Last year IBM launched its open-source AI Fairness 360 toolkit, designed to help developers build “a comprehensive bias pipeline” comprising “a robust set of checkers, ‘de-biasing’ algorithms, and bias explanations.”
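For a flavour of what that toolkit does, here is a minimal sketch using the open-source aif360 Python package: it computes a disparate-impact metric for a protected attribute and then applies one of the toolkit’s de-biasing algorithms. The toy data, column names and privileged/unprivileged groups are illustrative assumptions, not anything from IBM’s documentation.

```python
# Minimal sketch of bias checking with IBM's open-source AI Fairness 360 (aif360).
# The dataset and group definitions below are made up for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute, 'hired' the favourable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],            # 1 = privileged group, 0 = unprivileged
    "score": [0.9, 0.4, 0.8, 0.3, 0.7, 0.6, 0.5, 0.2],
    "hired": [1, 0, 1, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# One of the toolkit's "checkers": disparate impact well below 1.0 suggests the
# unprivileged group receives the favourable outcome less often.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact:", metric.disparate_impact())

# One of its "de-biasing" algorithms: reweigh training examples so both groups
# contribute comparably to the favourable outcome before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
print("Instance weights after reweighing:", dataset_transf.instance_weights)
```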

Trying to resolve these ethical issues is a fraught process. Google’s attempt to put an Advanced Technology External Advisory Council in place lasted barely a week, after its own workers objected to some of the worthies it recruited to the panel.