Gerardo Salandra, CEO of Rocketbots and Chairman of the AI Society of Hong Kong, spoke to an eager crowd at the World AI Show in Singapore on the topic of "Why Chatbots Fail."
Rocketbots, founded by Salandra, started as a chatbot agency in Hong Kong and has since turned into an AI-powered help desk platform. It was Rocketbots' start as an agency that gave the team plenty of insight into the chatbot industry: mainly the faults of chatbots, where those faults come from technologically, and their impact on businesses' perception of AI.

The first of Gerardo's points comes from the pain of chatbots: their constant failures. Nearly every bot-human interaction fails at some point, and this is still par for the course for retrieval-based AI. Retrieval-based AI is hardly AI, says Salandra; it uses simple decision trees to script conversations and relies on a computer's speed to seem smart. Learning in retrieval-based AI comes from data input and machine learning that tries to perfect an imperfect decision tree. Poor training, poor data, unplanned conversation flows, and colloquial human language are the major reasons behind a chatbot's failure.
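The decision-tree mechanics Salandra describes can be sketched in a few lines. This is a hypothetical illustration, not Rocketbots' or any vendor's actual implementation: every node, keyword, and response below is invented, and the hard-coded fallback shows exactly where the "inevitable failure" happens when a message falls outside the tree.

```python
# Illustrative retrieval-based chatbot: a hand-built decision tree
# keyed on exact keywords. All node names and responses are invented.

DECISION_TREE = {
    "greeting": {
        "keywords": ["hi", "hello"],
        "response": "Hello! Ask me about billing or shipping.",
    },
    "billing": {
        "keywords": ["invoice", "billing", "charge"],
        "response": "Your invoice is available in the billing portal.",
    },
    "shipping": {
        "keywords": ["ship", "delivery", "track"],
        "response": "Orders ship within 2 business days.",
    },
}

# The inevitable failure mode: anything the tree doesn't anticipate.
FALLBACK = "Sorry, I didn't understand that."

def reply(message: str) -> str:
    """Return the first canned response whose keywords match, else fall back."""
    words = message.lower().split()
    for node in DECISION_TREE.values():
        if any(kw in words for kw in node["keywords"]):
            return node["response"]
    return FALLBACK
```

Note that the matching is purely literal: "where is my invoice" hits the billing branch, but a colloquial rephrasing like "you guys overcharged me!!" can slip past every keyword, which is precisely the brittleness Salandra attributes to retrieval-based bots.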
Enterprise-level businesses have invested heavily in chatbots, especially for customer interactions, to cut customer support staffing costs. However, the failures of chatbots have left a bad taste in customers' mouths, and thus in the mouths of the companies that invested time and money into a hardly effective solution. This insight led Rocketbots to shift to a customer communication platform SaaS model and to a different approach to AI: neural networks. Neural networks learn much as humans do: positive input reinforces decisions, while negative input steers the model away from poor choices. A model like this does solve a majority of the issues that cause retrieval-based chatbots to fail, but it comes with its own faults as well.
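The feedback loop described above, where positive input reinforces a decision and negative input discourages it, can be sketched with a single logistic unit over a bag of words. This is a minimal toy, not Rocketbots' architecture: the vocabulary, messages, and learning rate are all assumptions chosen for illustration.

```python
# Toy sketch of feedback-driven learning: one logistic unit that
# scores a message as a complaint. All vocabulary and data invented.
import math

VOCAB = ["refund", "broken", "thanks", "great"]
weights = [0.0] * len(VOCAB)
bias = 0.0
LEARNING_RATE = 0.5

def featurize(message):
    """Bag-of-words: 1.0 if the vocabulary word appears, else 0.0."""
    words = message.lower().split()
    return [1.0 if w in words else 0.0 for w in VOCAB]

def predict(message):
    """Estimated probability that the message is a complaint."""
    x = featurize(message)
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def give_feedback(message, is_complaint):
    """Nudge weights toward the feedback signal (gradient step on
    log loss): positive examples reinforce, negative ones discourage."""
    global bias
    x = featurize(message)
    error = (1.0 if is_complaint else 0.0) - predict(message)
    for i, xi in enumerate(x):
        weights[i] += LEARNING_RATE * error * xi
    bias += LEARNING_RATE * error

# Repeated feedback gradually shifts the model's decisions.
for _ in range(50):
    give_feedback("my order is broken i want a refund", True)
    give_feedback("thanks it works great", False)
```

After enough rounds of feedback the unit scores "refund"-style messages high and "thanks"-style messages low, but, as the Tay episode below shows, it will learn whatever its feedback teaches it, good or bad.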
An example Salandra brings up is the failure of Microsoft's AI Tay, which learned from social media interactions in a public demonstration. Tay learned from all the data provided to it, which unfortunately produced an AI whose opinions and statements were offensive, racist, and profanity-laden. So while this method of learning is powerful, it shows that neural networks need to be nurtured like a small child so they learn what's best. It will be some time before AI can fully automate human-to-bot interactions, but with effective nurturing and goal-setting, there's reason to believe conversations can be automated to a certain extent.