Microsoft chatbot guidelines aim to keep talk online, on pizza, and off politics

Microsoft has published a list of ten guidelines for building a chatbot, so that developers don’t get themselves into trouble by creating online personas that take a pizza order and turn it into a screaming match about race, religion or, God forbid, Brexit.

As developers deploy online personas that interact with humans, and even make decisions, the potential for reputational damage alone is immense – as is the potential for hacking off customers who just want to speak to someone who can help them.

The chatbot guidelines were unveiled in a blog post by Microsoft’s VP for conversational AI, Lili Cheng, who said they were based on what the firm had learned through its own work and from customers and partners.

She said they were “just our current thoughts; they are a work in progress. We have more questions than we have answers today.”

On the face of it, the recommendations are largely commonsensical, starting with the advice to “Articulate the purpose of your bot and take special care if your bot will support consequential use cases”.

Ensuring reliability and secure data handling seems straightforward enough, while ensuring accessibility is surely a good thing.

Developers are advised to systematically assess their training data, and their own team’s diversity, to ensure fairness.

So far, so standard, but other items on the list throw up trickier ethical questions. The second nostrum is to “be transparent about the fact that you use bots as part of your product or service”. It’s often impossible to be sure whether your IM conversation with a retailer actually involves a real person, yet many machine ethicists say it should always be clear when you’re interacting with a robot – whether online or in more real-world settings.

This segues into Microsoft’s advice to “Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.” The company suggests that “If users feel trapped or alienated by a bot, they will quickly lose trust in the technology and in the company that has deployed it.”
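
To make that advice concrete, here is a minimal sketch of what such a hand-off rule might look like. Everything in it – the confidence score, the thresholds and the function names – is a hypothetical illustration, not anything Microsoft’s guidance or tooling prescribes.

```python
# Hypothetical sketch of a human hand-off rule: escalate when the bot's
# intent classifier keeps returning low-confidence answers, before the
# user feels trapped or alienated.
from dataclasses import dataclass


@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0-1.0 score from the bot's intent classifier


CONFIDENCE_THRESHOLD = 0.6
MAX_LOW_CONFIDENCE_TURNS = 2


def next_message(reply: BotReply, low_confidence_turns: int) -> tuple[str, int]:
    """Return the message to send and the updated low-confidence count."""
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return reply.text, 0  # the bot is competent here, carry on

    low_confidence_turns += 1
    if low_confidence_turns >= MAX_LOW_CONFIDENCE_TURNS:
        # The exchange has exceeded the bot's competence: hand off to a person.
        return "Let me connect you with a colleague who can help.", 0
    return "Sorry, I didn't quite catch that. Could you rephrase?", low_confidence_turns
```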

The next item is perhaps the trickiest of all: “Design your bot so that it respects relevant cultural norms and guards against misuse.” Developers are advised to “limit the surface area for norms violations. For example, if your bot is designed to take pizza orders, limit it to that purpose only, so that it does not engage on topics such as race, gender, religion, politics and the like.”

More tangibly, they are advised to “Apply machine learning techniques and keyword filtering mechanisms to enable your bot to detect and — critically — respond appropriately to sensitive or offensive input from users.”
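
As a rough illustration of the keyword-filtering half of that advice, the snippet below deflects a pizza bot away from off-limits topics. The word list and canned response are invented for the example; a production bot would pair something like this with a trained classifier rather than rely on it alone.

```python
# Hypothetical keyword filter for a pizza-ordering bot: deflect messages
# that stray onto sensitive topics instead of engaging with them.
import re

OFF_TOPIC_TERMS = {"politics", "election", "religion", "brexit", "race", "gender"}
DEFLECTION = "I can only help with pizza orders. What would you like today?"


def screen_input(user_message: str) -> str | None:
    """Return a deflection if the message hits a blocked topic, else None
    so normal order handling can continue."""
    words = set(re.findall(r"[a-z']+", user_message.lower()))
    return DEFLECTION if words & OFF_TOPIC_TERMS else None
```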

Unfortunately, it’s hard enough to get humans to stick to the rules – and a digital persona can suck up an awful lot of information awfully quickly, meaning less scrupulous types could turn it bad just as fast.

Microsoft itself fell foul of many of these recommendations with its own experimental chatbot, Tay. The Twitter chatbot was designed to mimic the persona of a young woman… and was wildly successful, assuming you think it only takes 14 hours to turn the average twentysomething woman into a Trump-supporting, hate-speaking pottymouth.

Microsoft quickly put Tay “to sleep”. In fact, she’s still so asleep that while Cheng name-checked Microsoft’s Cortana and Zo in her post, Tay got no mention at all.