
Platypus Innovation Blog

18 November 2019

How Nick Bostrom's "Fable of the Dragon-Tyrant" is an elitist folly

I recently encountered Nick Bostrom's Fable of the Dragon-Tyrant via a rather charming animation by the enigmatically named CGP Grey. The moral of the fable -- and in case anyone misses it, Bostrom spells it out in his notes -- is that death is a terrible thing, and we should devote much more of society's output to researching an end to death, i.e. longevity medicine.

This is initially appealing. I don't want to die; you don't want to die; nobody wants to die, and nobody wants their loved ones to die.

However, if you look at Bostrom's fable as a proposal for change, the folly becomes clear.

When a rich person dies before the age of 100, we can see that as a failure of medicine; perhaps more R&D could help, and yes, that would be good. But when a poor person dies before the age of 50, that is more often a failure of economics. So the greater problem by far is the economic and social challenges we face. Bostrom's fable is a call for more R&D money and more focus on the needs of the rich, implicitly at the cost of the needs of the poor. The idea that R&D is not doing enough to look after the needs of the rich is, to put it simply, horseshit. What we need is more effort directed toward the needs of everyone, and particularly economic and political change.

That is not to knock the huge value of R&D, and I speak as someone who works in the R&D field. But greater change could be delivered today, and with greater certainty, simply by changing our emphasis to care more, for more people.

I am also concerned that longevity, especially longevity for the elite prioritised above a just society, might be a very bad thing. It could well pose an existential risk to freedom and justice.

The limit on the greedy and power-hungry has always been that, eventually, they too shall pass. To quote Death from Bill & Ted: "Whether you're a King or a street-sweeper, sooner or later, you dance with the Grim Reaper."

The transience of our lives is often cited as a reason for not being materialistic - epitomised in the phrase "You can't take it with you when you go". But what happens if you don't go? If you could live much longer -- or even think you could, given enough money? This might curb generous impulses, in favour of hoarding wealth for your own much longer and more costly needs.
Our mortality may be the source of our morality.
I don't want to die. But I'm not devoting my energy and surplus money into R&D towards an eternal Zuckerbergian Elite. In the end, the Fable of the Dragon-Tyrant is peddling visions of eternal life, in return for your money and obedience. That is and always has been a lousy deal.

22 July 2019

Fast Company innovation event says: Slow down!

"The real danger is not that computers will begin to think like men, but that men will begin to think like computers."
   - Sydney J. Harris

Last week was Fast Company's inaugural European Innovation Festival -- launching an Old World version of its annual New York event. The theme was super-human technology: innovation from technologies like AI, and the increasingly blurry line between the real and digital worlds.


Yuval Harari predicts some changes. Photo by Fast Company
If there was one consistent message from the dozen talks and panel sessions, it was a call to slow down the pace of disruption. Not because these technologies are bad, but because they are so powerful that they will reshape society -- and we need to consider our end goals.

Keynote speaker Yuval Harari gave a great talk, in which he confidently predicted the end of Homo Sapiens as we know it, as AI, gene editing, and neural implant technologies come of age.[1] He is perhaps the only person who could describe the bionic re-engineering of human bodies as a "conservative approach" -- on the grounds that it would be a smaller change than a shift to computer-based life.

He was not making Terminator-style doomsday predictions: Harari sees the advances of technology as morally open. Evolution is a process, and current-day Homo Sapiens is not the end of it. Nor would the end of Homo Sapiens mean the end of humanity.

The worry is that AI and bio-hacking driven by raw competition have the potential to "downgrade humanity", e.g. by strengthening discipline and manipulation at the cost of caring and creativity. We need global cooperation and wisdom to use technology for the benefit of humanity - now more than ever.

The recent development of a de facto arms race in AI between the US and China particularly concerned Harari, as the harsh us-or-them logic of an arms race could pull us towards the worst outcomes.

Putting some real-world flesh on Harari's vision of upgraded humans, neuroscientists Moran Cerf and Riccardo Sabatini gave an excellent session on "hacking the human mind" -- the potential for computers to link directly with the brain.[2]

For many people, their online life is as real as their offline one, and their phone is already part of them. Cerf and Sabatini predict that this integration will start to be physical. The line is increasingly blurred, with, for example, augmented reality, the extra digital senses of a Fitbit, and, most sci-fi of all, direct brain-computer interaction.

40,000 people in the US already have computer chips in their head, as part of how their brains work.

These are mostly simple chips which release electric pulses to help alleviate epilepsy and other brain diseases. But more complex brain-computer integrations are already in use: cochlear implants provide hearing to those who have lost it. A microphone sends electrical signals into the hearing region of the brain. Wonderfully, the brain learns to hear the new electrical impulses, growing new neural connections to interpret them.

What about a chip to allow the brain to make a Wikipedia or Google search, and hear the results internally? How far off is that?

Professor Cerf talked of his work in recording dreams, using electrodes implanted during brain surgery. This is real technology (if still crude in its outputs -- don't expect a video), being used in post-surgery therapy. The conference also demoed an easy-to-use baseball-cap device which can measure simple moods from brain signals.

Gucci CEO Marco Bizzarri and poet Shanelle Gabriel.
It was a glamorous and diverse conference. The event was hosted by Gucci at their HQ in Milan. Gucci's CEO Marco Bizzarri attended, a dapper giant. Film star and rock star Jared Leto added to the glitz.[3] Alongside the rock-star persona, he is also an impressive tech investor -- his portfolio includes Slack, Uber, Snapchat, Spotify, and Airbnb. The attendees were wonderfully diverse. Partly marking how AI has become mainstream, and partly due to Fast Company's design to create a stimulating cross-sector event, there were entrepreneurs, investors, artists, and HR people (amongst others). The event opened with a short music set[4], and a performance poet created live synopsis poems to end each session.

The recurrent theme was looking at how technology affects us. Ben Schwerin from Snapchat spoke well on how tech companies should take more responsibility for the mental effects of their systems, and the importance of designing for user benefit, not just good user stats. He also called for more government involvement:
"[Social media] platforms have gotten so big and so powerful that it probably is not the healthiest thing to put that power in the hands of a few people who are motivated by running a for-profit business."
Although the technology discussed at the Fast Company event was new, the questions it raises are old ones: What do we value? What is important about being human? And how do we organise for the general good?

The questions may be as old as humanity -- but the emphasis has changed. Super-human technology has moved these questions from a matter of individual philosophy, to one for public policy.

[1] More on Yuval Noah Harari's talk 
[2] More on neuroscientists Cerf and Sabatini
[3] A toke of Jared Leto

A panorama of nice people, and me.

1 March 2019

Why I'm Giving This Talk (And not a Bot)

These are the talk notes and slides from a talk I gave at a Scotland Internet of Things workshop. My apologies for where the notes are incomplete.

Hello
Thank You

Let's start with me.

I'm Daniel Winterstein. I came to Edinburgh in 1999 to study Artificial Intelligence. It's a good city. It's a good subject.

I'm the Founder and CTO at Winterwell, a machine-learning consultancy. We make a product called SoDash, which is a social media tool, used by Harrods, Selfridges, Network Rail, and others.

We're pivoting to become Good-Loop, which is an ethical advertising and data-management platform.



Conversational UI - or "bots"


Why?


What if we're successful?



Someday, you're going to be sacked by a computer.

Which is convenient, as you'll presumably be able to get your P45 at the same time. The joined-up process will be so smooth, it will be a bureaucrat's wet dream. With cross-channel conversational follow-through and automated data-entry, it will make grown men weep.

Solution: Citizen's Wage / Basic Income



It's understandable to find this scary.

However, it's a sad reflection on the human condition that a life without hard or menial work scares us. Imagine a life of pleasant, contented happiness: what a scourge on the face of the earth it would truly be... Douglas Adams' writing on the dolphins springs to mind.

Bots should deliver freedom from drudge work



Let's talk a bit about how today's bots go wrong, or make things worse.

Insincerity, Poor Etiquette, and Being Useless

These sins are not inherent to bots. 
Pushy sales-people and useless customer-service are not new inventions.
But bots allow companies to be insincere, annoying, and useless at scale. 




I tried getting a bot to do the talk.   
Me: Hey Cortana, Could you help with my talk?
Cortana:
Me: Thank you Cortana
Cortana:

So that wasn't a success.



Let's look at another example. There's an anti-pattern emerging here: Bots shouldn't pretend to be human.

x.ai - brilliant idea: you want to schedule a meeting, you cc their bot, and it arranges the meeting.

Simple and focused - so where does it go wrong?

It turns out even this really focused problem is surprisingly hard. They've been going 3 years, and they haven't cracked it yet. Right now, x.ai is only part AI; they also have teams of people processing messages. So in order for the bot to pretend to be human, they have people pretending to be bots.
This is not living the dream.

And the kicker: Doodle is a better service, in spite of being much simpler. Because Doodle isn't confined by pretending to be human, it can offer a user-interface that fits the problem.


Example emails

“Daniel, open this email for 12 people you should meet :)”
spam

“Re: Making Great Customer Experiences”
spam

If it's a sales message - don't pretend to be friends. If it's a cold email - don't pretend we're having a conversation.

A simple test for whether you should deploy a chat bot: how would you feel as the recipient?

If the person you're talking to knew the full picture -- what's automated and what the goals are -- what would they think?
Would they be happy to receive fast service? Or annoyed at a pretence at caring?

We need a New Etiquette for Bots




Clippy was intrusive - though the modern web has bots that are worse.


Etiquette and Sincerity are about how we as companies use bots. The solution is not technical - it's about caring for our public.

Being Useless -- that is a technical problem.



Fear: That the bot will do more harm than good.

Quality: The bots can't deliver (yet).

Time/Cost: To learn a system, work out the common conversations, and code them up.





Instead of programming the bot, what if the bot learns from you?

We want bots to do repetitive tasks. If a task is repetitive, there will be lots of examples for them to learn from.

In general, building bots by machine learning is hard: communication is hard, it needs full human understanding, and there is never enough data. But that's in general.

If you frame the task -- something specific, structured, and where failing to understand is OK (bots should know when to stop and hand over gracefully) -- then it becomes possible.
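
To make that concrete, here is a minimal sketch in Python of a framed bot task: match a message against a handful of known intents, and hand over to a human when confidence is low. The intents, scoring, and threshold here are illustrative assumptions, not any real product's API; a production bot would use a trained classifier rather than keyword overlap.

    # Illustrative only: a framed bot task with a graceful hand-over.
    INTENTS = {
        "opening_hours": ["when are you open", "opening hours", "what time do you close"],
        "refund": ["i want a refund", "money back", "return my order"],
    }

    def classify(message):
        """Crude keyword-overlap score; stands in for a trained classifier."""
        words = set(message.lower().split())
        best_intent, best_score = None, 0.0
        for intent, examples in INTENTS.items():
            for example in examples:
                example_words = set(example.split())
                overlap = len(words & example_words) / len(example_words)
                if overlap > best_score:
                    best_intent, best_score = intent, overlap
        return best_intent, best_score

    def handle(message, threshold=0.5):
        intent, score = classify(message)
        if score < threshold:
            # Failing to understand is OK: stop and hand over gracefully.
            return "Let me pass you to a colleague who can help."
        return "[automated answer for intent: %s]" % intent

    print(handle("when are you open on sundays?"))  # confident: automated answer
    print(handle("my parcel arrived squashed"))     # unsure: hands over to a human

The design choice is that the threshold makes the failure mode explicit: the bot never bluffs, it escalates.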

15 February 2019

GPT-2: A new AI with amazing creative writing skills

This language AI is really amazing. Let's cut to the unicorns:

The AI was given a creative writing task: given an opening prompt, continue the story. The example reported has been cherry-picked (i.e. the researchers filtered out the duds and are showing off the best output), but it is still very, very impressive.

Prompt:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
Model completion (machine-written!):
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. 
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. 
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez. 
Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns. 
While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.” 
Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America. 
While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.” 
However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.
Wow.

And: the AI was entirely self-taught. It has a structure designed for sequence learning, which is then trained on the text from millions of web pages. It learns to predict the next word as it reads. The impressive knowledge of language patterns and story structures shown above is all learned from the data.
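
To illustrate the objective (though not the scale), here is a toy next-word predictor in Python using simple bigram counts. This is just a sketch of "predict the next word"; GPT-2 itself uses a large neural network, not counts, and learns far richer patterns.

    # Toy illustration of next-word prediction, the training objective behind GPT-2.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the cat ran after the dog .".split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in training."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat' (seen twice, vs once each for 'mat' and 'dog')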

As the researchers note, this level of AI has a lot of applications - good and bad. So they are not releasing the full model yet, asking the AI community and wider society to consider how we manage this technology.

I read this yesterday. Still processing it with my jaw on the floor.

Naming things is an important part of humanising them, so the researchers have called this system GPT-2. See https://blog.openai.com/better-language-models/ for a summary of GPT-2 and a link to the technical paper. The full model has not been released, but the paper and partial code suggest the architecture may be surprisingly simple and generic, though large and expensive to train. Spoiler alert: it's not an LSTM (long short-term memory), the neural-net architecture which has ruled NLP work for the last few years. It uses attention-based short-term memory, in an architecture called a Transformer. Attention functions do have some common ground with the memory gates of an LSTM, so it's evolution, not revolution. Except there's a point where evolution becomes revolutionary.
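
For the curious, the core of that attention mechanism is small enough to sketch. Below is a minimal scaled dot-product attention in Python/NumPy; the names and shapes are illustrative, not GPT-2's actual code, which adds multiple attention heads, learned projections, and many stacked layers.

    # Minimal scaled dot-product attention, the building block of a Transformer.
    import numpy as np

    def attention(Q, K, V):
        """Q, K, V: (sequence_length, d) arrays of queries, keys, and values."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)  # how strongly each position attends to each other
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ V  # each output is a weighted mix of the values

    x = np.random.randn(5, 8)  # 5 tokens, each an 8-dimensional embedding
    print(attention(x, x, x).shape)  # (5, 8): tokens re-expressed as mixes of all tokens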

By Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever of OpenAI.com
