Platypus Innovation Blog

1 March 2019

Why I'm Giving This Talk (And not a Bot)

These are the notes and slides from a talk I gave at a Scotland Internet of Things workshop. My apologies where the notes are incomplete.

Hello
Thank You

Let's start with me.

I'm Daniel Winterstein. I came to Edinburgh in 1999 to study Artificial Intelligence. It's a good city. It's a good subject.

I'm the Founder and CTO at Winterwell, a machine-learning consultancy. We make a product called SoDash, a social media tool used by Harrods, Selfridges, Network Rail, and others.

We're pivoting to become Good-Loop, which is an ethical advertising and data-management platform.



Conversational UI - or "bots"


Why?


What if we're successful?



Someday, you're going to be sacked by a computer.

Which is convenient, as you'll presumably be able to get your P45 at the same time. The joined-up process will be so smooth, it will be a bureaucrat's wet dream. With cross-channel conversational follow-through and automated data-entry, it will make grown men weep.

Solution: Citizen's Wage / Basic Income



It's understandable to find this scary.

However, it's a sad reflection on the human condition that a life without hard or menial work scares us. Imagine a life of pleasant, contented happiness: what a scourge on the face of the earth it would truly be... Douglas Adams' writing on the dolphins springs to mind.

Bots should deliver freedom from drudge work



Let's talk a bit about how today's bots go wrong, or make things worse.

Insincerity, Poor Etiquette, and Being Useless

These sins are not inherent to bots.
Pushy sales-people and useless customer service are not new inventions.
But bots allow companies to be insincere, annoying, and useless at scale.




I tried getting a bot to do the talk.   
Me: Hey Cortana, Could you help with my talk?
Cortana:
Me: Thank you Cortana
Cortana:

So that wasn't a success.



Let's look at another example. There's an anti-pattern emerging here: Bots shouldn't pretend to be human.

x.ai - brilliant idea: you want to schedule a meeting, you cc their bot, and it arranges the meeting.

Simple and focused - so where does it go wrong?

It turns out even this really focused problem is surprisingly hard. They've been going for 3 years, and they haven't cracked it yet. Right now, x.ai is only partly AI; they also have teams of people processing messages. So in order for the bot to pretend to be human, they have people pretending to be bots.
This is not living the dream.

And the kicker: Doodle is a better service, in spite of being much simpler.
Because Doodle isn't confined by pretending to be human, it can offer a user-interface that fits the problem.


Example emails

“Daniel, open this email for 12 people you should meet :)”
spam

“Re: Making Great Customer Experiences”
spam

If it's a sales message - don't pretend to be friends. If it's a cold email,
don't pretend we're having a conversation.

A simple test for if you want to deploy a chat bot: How would you feel as the recipient?

If the person you're talking to knew the full picture -- what's automated and what the goals are -- what would they think?
Would they be happy to receive fast service? Or annoyed at a pretence at caring?

We need a New Etiquette for Bots




Clippy was intrusive. Though the modern web has bots that are worse.


Etiquette and sincerity are about how we as companies use bots. The solution is not technical - it's caring for our public.

Being Useless -- that is a technical problem.



Fear: That the bot will do more harm than good.

Quality: The bots can't deliver (yet).

Time/Cost: To learn a system, work out the common conversations,
and code them up.





Instead of programming the bot, what if the bot learns from you?

We want bots to do repetitive tasks. If a task is repetitive, there will be lots of examples for them to learn from.

In general, machine-learning for bots is hard, because communication is hard and needs full human understanding, and there is never enough data. But that's in general.

If you frame the task -- something specific, structured, and where failing to understand is OK
(bots should know when to stop and hand over gracefully) -- then it becomes possible.
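The idea above - answer only when confident, otherwise hand over gracefully - can be sketched in a few lines. This is a toy illustration with made-up names (`EXAMPLES`, `CONFIDENCE_THRESHOLD`, a crude word-overlap score standing in for a real learned model), not how any particular product works:

```python
# A bot that only answers when it is confident, and hands over to a human
# otherwise. The "model" here is a toy word-overlap score; in practice this
# would be a trained classifier learned from past conversations.

CONFIDENCE_THRESHOLD = 0.8  # below this, the bot hands over

# Hypothetical intents, each with a few example phrases the bot "learned" from.
EXAMPLES = {
    "reset_password": ["reset my password", "forgot password", "can't log in"],
    "opening_hours": ["when are you open", "opening hours", "what time do you close"],
}

def score(message, phrases):
    """Crude similarity: fraction of the message's words that appear in any example phrase."""
    words = set(message.lower().split())
    example_words = {w for p in phrases for w in p.lower().split()}
    if not words:
        return 0.0
    return len(words & example_words) / len(words)

def handle(message):
    """Return (intent, reply); intent is None when the bot hands over."""
    best_intent, best_score = None, 0.0
    for intent, phrases in EXAMPLES.items():
        s = score(message, phrases)
        if s > best_score:
            best_intent, best_score = intent, s
    if best_score >= CONFIDENCE_THRESHOLD:
        return best_intent, "Bot: I can help with '%s'." % best_intent
    # Graceful handover: say so, rather than guess.
    return None, "Bot: I'm not sure I understand - let me pass you to a colleague."
```

The point is structural: the task is specific (a fixed set of intents), and failing to understand is handled explicitly rather than papered over.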

Good-Loop Unit