IBM whips out toolkit to build a “comprehensive bias pipeline” in AI and ML workflows

IBM has thrown its considerable weight behind efforts to detect and eliminate bias in AI and machine learning systems, enabling devs to comply with ethical and legal requirements. Or just be more fair.

“The application of AI algorithms in domains such as criminal justice, credit scoring, and hiring holds unlimited promise,” Aleksandra Mojsilovic, IBM Research Fellow; Angel Diaz, Vice President of Developer Technology and Advocacy; and Ruchir Puri, Chief Architect of IBM Watson, wrote in a blog post last week. “At the same time, it raises legitimate concerns about algorithmic fairness.”

“And we need to remember that training data isn’t the only source of possible bias,” they continued. “It can also be introduced through inappropriate data handling, inappropriate model selection, or incorrect algorithm design. Bias can also affect usage data.”

Mojsilovic et al pointed out that not all bias is illegal or even unethical; rather, it can exist in subtle ways that would still be “undesirable” for a company’s strategy. Or indeed for the well-being of individuals and society at large, one might add.

So what is the answer? A “comprehensive bias pipeline” comprising “a robust set of checkers, ‘de-biasing’ algorithms, and bias explanations”.

This comes in the shape of IBM’s AIF360 toolkit, which includes an open source library and Jupyter notebooks, and encompasses 30 fairness metrics and nine bias mitigation algorithms. We’re guessing the name is a riff on the old IBM 360 mainframe, not a suggestion that you’re allowed five days’ worth of bias a year.

According to the authors, “The toolkit’s fairness metrics can be used to check for bias in machine learning workflows, while its bias mitigators can be used to overcome bias in a workflow to produce a more fair outcome.”
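
To give a flavour of how that looks in practice, here is a minimal sketch – our own, not lifted from IBM’s material – using the toolkit’s Python API. It assumes aif360 is installed, the raw Adult census data files have been fetched per the library’s instructions, and ‘sex’ is the protected attribute of interest; Reweighing is just one of the nine mitigators on offer.

```python
# A minimal sketch, not IBM's own example. Assumes aif360 is installed and
# the raw Adult census data files have been downloaded as the library's
# documentation describes; 'sex' is used as the protected attribute here.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

privileged = [{'sex': 1}]    # male is encoded as the privileged group
unprivileged = [{'sex': 0}]

dataset = AdultDataset()

# One of the 30 fairness metrics: the difference in favourable-outcome
# rates between unprivileged and privileged groups (0.0 is ideal).
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Mean difference before mitigation:', metric.mean_difference())

# One of the nine mitigators: Reweighing adjusts instance weights so a
# downstream learner sees a fairer training distribution.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print('Mean difference after mitigation:', metric_transf.mean_difference())
```

A mean difference closer to zero after the transform is the desired result – the check-then-mitigate pattern the authors describe.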

You can download the AIF360 toolkit here and work through a number of Jupyter notebooks, such as the gender classification of face images tutorial, which ultimately will allow you to learn a new classifier and “obtain updated fairness metrics”.
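
That “learn a new classifier and obtain updated fairness metrics” step might look something like the following – again a hedged sketch under the same assumptions as above, using a scikit-learn logistic regression as a stand-in for whatever model the notebooks actually use.

```python
# A hedged sketch, not IBM's notebook: train a classifier on reweighed data
# and recompute fairness metrics on a held-out split. Assumes aif360 and
# scikit-learn are installed and the Adult data files have been fetched.
from aif360.datasets import AdultDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import ClassificationMetric
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

privileged, unprivileged = [{'sex': 1}], [{'sex': 0}]

train, test = AdultDataset().split([0.7], shuffle=True)
train = Reweighing(unprivileged_groups=unprivileged,
                   privileged_groups=privileged).fit_transform(train)

# Learn a new classifier on the de-biased training set, honouring the
# instance weights produced by Reweighing.
scaler = StandardScaler().fit(train.features)
clf = LogisticRegression(solver='liblinear')
clf.fit(scaler.transform(train.features), train.labels.ravel(),
        sample_weight=train.instance_weights)

# Obtain updated fairness metrics for the classifier's predictions.
test_pred = test.copy()
test_pred.labels = clf.predict(scaler.transform(test.features)).reshape(-1, 1)
metric = ClassificationMetric(test, test_pred,
                              unprivileged_groups=unprivileged,
                              privileged_groups=privileged)
print('Disparate impact:', metric.disparate_impact())
print('Equal opportunity difference:', metric.equal_opportunity_difference())
```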

You can also use IBM’s hosted web app to explore a number of data sets for bias.

Thirdly, IBM has added a new code pattern – covering loan decisions – to its collection of AI and data analytics code patterns, and has promised more soon. And if you feel something is missing, IBM invites you to contribute to the project.

Bias is one of the big issues facing AI and ML developers, as it becomes clear that projects can go awry in unexpected ways at every stage – from the moment inadvertently skewed data is introduced, through to how systems, organisations or individuals act on the insights derived from that data.

However, bias is not the only ethical dilemma facing developers in ML and AI. Google and Microsoft engineers demonstrated this earlier this year when they flagged concerns about some of the government-related projects their employers had involved them in – such as work for the US Department of Defense and US immigration authorities – with some engineers moved to resign over the contracts.