What do you do when you can’t tell right from wrong anymore? Best case scenario? You turn to outside guidance.
Though Google lists “don’t be evil” among its guiding principles, asking co-founder Sergey Brin to define evil at every step is hardly practical these days (as events such as last year’s employee walkout have shown). To stay on the right side of history when it comes to AI, the company therefore drew up a set of AI principles, which it introduced to the public in summer 2018.
Observers might say the step had become necessary after Google employees urged CEO Sundar Pichai earlier that year to end the company’s collaboration with the Pentagon on Project Maven, which uses AI to analyse drone footage. As a result, the company officially decided not to renew that contract, although reports of continued support via company services still surface.
Google hasn’t been the only company forced to react to employee criticism; Amazon, for example, found itself in a similar position when it became apparent that it was selling facial recognition software to law enforcement.
Getting back to Google’s AI objectives, the company made clear that AI algorithms and products should be socially beneficial, built and tested for safety, subject to appropriate human direction and control, and made available only for uses in line with these principles. Its developers should avoid creating or reinforcing unfair bias, incorporate privacy design principles, and uphold high standards of scientific excellence.
Applications the company won’t pursue under its AI principles include technologies that cause or are likely to cause overall harm, weapons or other technologies designed to directly injure people, tech that facilitates surveillance “violating internationally accepted norms”, and technologies that contravene “accepted principles of international law and human rights”.
But since any good guideline will only work with proper implementation and reinforcement, Google now has an Advanced Technology External Advisory Council in place to “consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”.
The inaugural council will serve over the course of 2019 and consists of eight individuals from a variety of backgrounds: behavioural economist and privacy researcher Alessandro Acquisti, applied and computational mathematics expert Bubacarr Bah, associate professor at the University of Bath and AI ethics luminary Joanna Bryson, foreign policy specialist and diplomat William Joseph Burns, philosopher and digital ethics authority Luciano Floridi, industrial engineering expert Dyan Gibbens, NLP researcher De Kai, and Kay Coles James, whose specialty is public policy.
Council meetings will begin in April, kicking off a series of four discussion rounds. Reports summarising the findings of those meetings are to be published, with the aim of also informing the work of “the broader technology sector”.