Chatbots: A sorry excuse for failing the Turing finals

yitch
3 min read · May 4, 2016


Chatbots are now all the rage because the biggest companies in the world are doing it.

There’s a lot of money to be made:
http://techcrunch.com/2016/04/19/telegram-encourages-devs-to-build-bots-with-1m-giveaway/

This feels like a classic play out of the Hollywood playbook to recycle older titles from the archives, slap on better CGI and resell the movie because it worked in the past.

The biggest challenge to moving forward is passing the Turing Test, the ultimate exam in the quest for AI. The most successful attempt to date has been Eugene Goostman. However, the assessors made so many assumptions that it is as good as a professor giving the student a severe handicap.

AI needs to pass the Turing Test before it becomes useful. Chatbots powered by existing AI are nothing more than a 15-second magic show.

Why the Turing Test?

Personally, I find it easier to trust another human than to trust a machine. It’s easier to trust something that passes as human (without knowing the true brains behind the responses). It boils down to human nature and our ego, I guess.

Artificial intelligence is more than the ability to speak and respond. It deals with our psychology. We have come a long way, but we are still limited by what we know about the subconscious of the human mind. The hardware that drives us is well studied. However, the software still baffles us and is difficult to decipher.

What’s next?

For chatbots to become more human, there need to be improvements in the following (I am writing this from an amateur standpoint, so if you are an expert, correct away!):

  1. Natural language and its nuances
    - Humor
    - Sarcasm
  2. Ability to read emotions for feedback
  3. Ability to understand human psychology and how to deal with different personalities

Can technology help?

Deep learning, after recent advances and renewed interest, is gaining traction and making headway toward general-purpose AI. This was demonstrated by Google’s AlphaGo.

Possible applications could include training for specific types of humor from different regions. (I always find it difficult to understand Australian humor and it’s really not the accent).

General-purpose AI could help extract features from training data tagged as funny for different people. Of course, it would have to seek feedback to determine whether it is learning properly. (I would love to pay for an AI comedian.)
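As a toy illustration of what “extracting features from data tagged as funny” could mean, here is a minimal sketch in Python. Everything in it is invented for the example: the tagged lines, the tags, and the bag-of-words scoring. A real system would need vastly more data, proper machine learning, and the feedback loop described above.

```python
from collections import Counter

# Hand-tagged toy corpus (invented for illustration).
TAGGED_LINES = [
    ("a dingo walks into a servo", "funny"),
    ("please reset your password", "not_funny"),
    ("my wifi has trust issues", "funny"),
    ("the meeting is at 3pm", "not_funny"),
]

def train(tagged_lines):
    """Count word frequencies per tag (a naive bag-of-words model)."""
    counts = {"funny": Counter(), "not_funny": Counter()}
    for line, tag in tagged_lines:
        counts[tag].update(line.split())
    return counts

def score(model, line):
    """Label a line 'funny' if it shares more words with funny examples."""
    words = line.split()
    funny = sum(model["funny"][w] for w in words)
    plain = sum(model["not_funny"][w] for w in words)
    return "funny" if funny > plain else "not_funny"

model = train(TAGGED_LINES)
print(score(model, "my dingo has trust issues"))   # overlaps the funny lines
print(score(model, "reset the meeting password"))  # overlaps the plain lines
```

Even this crude counter hints at the regional-humor point: a model trained on Australian-tagged lines would learn different “funny” words than one trained on, say, British ones.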

The work of Paul Ekman would be an amazing foundation for AI to learn features concerning micro-expressions and what they mean. That dataset is nicely tagged (I’m just not sure if it is big enough to train with).
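To make the idea concrete, here is a hypothetical sketch of using tagged expression features: compare a face’s feature intensities against labelled prototypes and pick the nearest. The three-number vectors and the labels are made up for illustration; real facial coding in Ekman’s tradition uses dozens of action units scored by trained annotators.

```python
import math

# Invented prototype feature vectors (pretend intensities of three
# facial movements) labelled with the emotion they typically signal.
PROTOTYPES = {
    "happiness": (0.9, 0.8, 0.1),
    "anger":     (0.1, 0.2, 0.9),
    "surprise":  (0.7, 0.1, 0.0),
}

def classify(features):
    """Return the label of the nearest prototype by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda label: dist(features, PROTOTYPES[label]))

print(classify((0.8, 0.7, 0.2)))  # closest to the "happiness" prototype
```

A chatbot with even this rough emotional readout could adjust its responses, which is exactly the feedback channel item 2 in the list above asks for.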

The trickiest part is human psychology. We know too little of ourselves to know how to characterize different individuals, or whether we should do so in the first place. It would be cool to have an AI learn to interact with folks just by reading “What Do You Say After You Say Hello?”.

It’s all a bubble (again)

Personally, I really think the whole chatbot thing is a bubble, at least until they can get them to sound 5000% better than the annoying automated recordings for customer service. Chatbots right now are just cheating by stating up front that they are merely bots. To me, this is a mistake that companies will need to make and learn from, hopefully bringing them down the path toward the kind of intelligence Asimov painted decades ago.
