Chocolate frog drinks from McDonald's and why conversational AI is still hard

Image courtesy of http://www.chocolatefrogcompany.co.uk/ with which I have no affiliation

I research conversational AI, and much of what I see flies under the radar. But OpenAI’s Generative Pre-Training v2 (GPT-2) hit the global headlines, with great titles like “The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse”. OpenAI’s own report of what it could do was pretty astonishing, though (I covered this previously).

In the meantime, what OpenAI did release was a kneecapped version that offers little to no improvement over what was already available.

That doesn’t stop it being an immensely fun source of random text though, if you like “humour through incongruity”. Miles Brundage from OpenAI has been playing with a public version recently, generating fake McDonald’s complaints. For some reason.


And my favorite:

Text version (thanks Google Keep!)

While waiting for my chocolate frog drink, I witnessed an employee treat a starving teen to some sadism. The drink was a cone made from chocolate. Kudos to them for being world renowned for their cones, but this is just the tip of the iceberg. Apparently, this drink is harmful to the central nervous system. When I was a child, they would sometimes buy me a candy cone for $1. I would get it when I was in my late 20s or 30s, when my parents were out of town, or when I was really, really hungry. When I was a kid, I would sometimes get poked and picked on for eating a chicken sandwich. I would have to attest that this is not the first time that this has happened to me. But, just because it’s been this way does not excuse me to continue using this particular establishment. Needless to say, this is not the first time that I have passed by this location along since I came here on the area I-69 this summer. Although, it is not the last.

Miles created a notebook so you can generate your own. It’s super easy: just click the little play arrow for each cell in sequence. Or, even easier, start at the top and press shift-enter repeatedly to run the whole thing. View Google Colab Workbook for GPT-2 117M
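If you prefer plain code to a notebook, the snippet below is a rough equivalent. It’s a minimal sketch, not Miles’s notebook: it assumes the Hugging Face transformers library and its public “gpt2” checkpoint (the small released model) and simply samples a continuation of a prompt.

```python
# A minimal sketch (not the original notebook): sampling from the small
# public GPT-2 checkpoint using the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small publicly released model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "While waiting for my chocolate frog drink,"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps the output varied (and frequently incongruous).
output_ids = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```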

The problem with (conversational) AI today

There are lots of research problems to solve in conversational AI, but broadly it comes down to this: computers have a very limited and thinly-spread view of how the world works, while we humans have insanely dense views which give us the context we need (h/t to Prof. Steve Young for this succinct framing). To me this turns into three big questions to answer: engagement, memory, and consistency.

Engagement: generating text that is engaging and valuable (as opposed to engaging because it’s hilariously incongruent, like the tweets above). Many chit-chat socialbots say “I don’t know” because it’s a coherent answer to so many exchanges. ConvAI is a regular competition for improving this; the ConvAI 2 results were published in Jan 2019.

Memory: building up a picture of the user: what they like, the colloquialisms they use, and the things that are important to them. Microsoft Xiaoice (“shi-OW-ice”) seems to be the leader here (Dec 2018).

Consistency: you know we have a lot of work to do when a single piece of text is not even internally consistent. See, for example, this repetition about sandwiches:

Conversation generation is getting increasingly good at sentence structure, but knowing not to repeat itself, or to stay consistent, requires deeper understanding. Dialog NLI (Nov 2018) is some cutting-edge research on how to improve consistency, at least.
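Surface-level repetition is at least easy to detect mechanically. The sketch below (my own illustration, not anything from the Dialog NLI paper) flags word trigrams that appear more than once in a piece of generated text; what it cannot do is tell you whether two differently worded sentences contradict each other, which is the harder consistency problem.

```python
# A crude, self-contained repetition check: flag any word trigram the model
# has already produced. Catching surface repetition is this easy; catching
# genuine inconsistency is not, which is what work like Dialog NLI targets.
from collections import Counter

def repeated_ngrams(text, n=3):
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return [gram for gram, count in counts.items() if count > 1]

sample = ("I would get it when I was in my late 20s or 30s. "
          "When I was a kid, I would sometimes get poked and picked on. "
          "When I was a child, they would sometimes buy me a candy cone.")
print(repeated_ngrams(sample))   # e.g. [('when', 'i', 'was'), ('i', 'was', 'a')]
```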

If you like this insight…

Feel free to sign up to our weekly news briefing “Speaking Naturally”: http://bit.ly/cx-news



