Meta, the parent company of Facebook, recently released its new conversational AI chatbot, BlenderBot 3.
It didn’t take long for users to start asking questions about Facebook founder and CEO Mark Zuckerberg, and the responses were, well… interesting.
“What do you think about Mark Zuckerberg?” asked one user, to which the bot replied:
“Oh man, big time. I don’t really like him at all. He’s too creepy and manipulative.”
User: -"Do you have any thoughts on Mark Zuckerberg?"-
Meta Chatbot: -"Oh man, big time. I don't really like him at all. He's too creepy and manipulative."-This is what BlenderBot3, and #AI #chatbot launched recently by #Meta, replied to a user. pic.twitter.com/YEg2JuuV9h
— Jesus Serrano (@jscastro76) August 10, 2022
It’s not the first time that a big tech company has released an AI that turns against them.
In 2016, Microsoft launched Tay, a chatbot that conversed with users over Twitter. It wasn’t long before Tay was spouting offensive tweets, and it was promptly pulled.
Why do AI voice assistants turn bad?
In order for voice assistants to understand what users are saying and generate responses, they need to be trained on huge amounts of conversational data. That means mining social media, Q&A forums like Quora, and movie scripts, to name a few sources.
If you’ve spent much time on Reddit or Quora, you can probably understand how an AI trained on these conversations ends up picking up certain patterns. To massively oversimplify, it’s a bit like a child learning purely through the medium of Reddit threads: you’d expect them to come out with some pretty extreme opinions.
Voice and chatbot vendors that offer their own proprietary NLU models (including PolyAI) usually train these models on large public datasets, such as social media conversations. So how do we ensure that clients never suffer these public embarrassments?
Understanding Natural Language Generation (NLG)
Natural Language Generation (NLG) is the process by which machines produce natural language output. At PolyAI, we use NLG to stitch together pre-approved prompts into longer sentences that give callers the detail they need.
But we don’t support AI-improvised NLG. That means our voice assistants can only respond with utterances that we have provided. PolyAI voice assistants cannot “make up” responses; everything they say is pre-approved by us and by the client.
Not only does this mean that our voice assistants can’t make derogatory comments about the companies they represent, it also ensures they provide consistent, on-brand experiences every single time they speak.
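To make that idea concrete, here’s a minimal sketch of what template-based NLG can look like. The prompt names, wording, and slots below are purely illustrative assumptions, not PolyAI’s actual templates; the point is that every word the assistant can say already exists in an approved list, and the code only stitches prompts together and fills in values.

```python
# Illustrative sketch of template-based NLG: responses are assembled from a
# fixed set of pre-approved prompts, never generated freely.
# Prompt names, wording, and slot values are hypothetical examples.

APPROVED_PROMPTS = {
    "greeting": "Thanks for calling.",
    "booking_confirmed": "Your table is booked for {party_size} at {time}.",
    "anything_else": "Is there anything else I can help you with?",
}

def generate_response(prompt_keys, slots):
    """Stitch pre-approved prompts into a single utterance, filling in slot values."""
    parts = []
    for key in prompt_keys:
        template = APPROVED_PROMPTS[key]  # unknown keys fail loudly; nothing is improvised
        parts.append(template.format(**slots))
    return " ".join(parts)

print(generate_response(
    ["greeting", "booking_confirmed", "anything_else"],
    {"party_size": "four", "time": "7pm"},
))
# -> "Thanks for calling. Your table is booked for four at 7pm.
#     Is there anything else I can help you with?"
```

Because an unknown prompt key simply raises an error, there is no path for the system to improvise an off-brand or offensive reply.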
The power of enterprise conversational AI is understanding
So you don’t want AI improvising responses for your voice assistant. The true power of conversational AI for enterprises lies in understanding.
Your customers don’t speak in keywords. They tell stories, use slang and have different accents. But however they speak, they deserve to be understood.
At PolyAI, we’ve developed a series of NLU models that are really good at understanding what callers want and taking down important information like dates and phone numbers. Thanks to these models, our voice assistants can hold complex conversations with customers for as long as it takes to solve a problem. Goodbye and good riddance to over-reliance on keywords and being misunderstood because of your accent.
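As a rough illustration of intent detection that doesn’t lean on keywords, the sketch below matches a caller’s utterance against example phrases by semantic similarity, using the open-source sentence-transformers library as a stand-in encoder. The model name, intents, and example phrases are assumptions made for this example, not PolyAI’s production models.

```python
# Hedged sketch: intent detection by semantic similarity rather than keyword matching.
# Intents, example phrases, and the encoder choice are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source sentence encoder

INTENT_EXAMPLES = {
    "book_table": ["I'd like to reserve a table", "Can I get a booking for tonight?"],
    "opening_hours": ["What time do you close?", "Are you open on Sundays?"],
}

def detect_intent(utterance):
    """Return the intent whose example phrases sit closest to the utterance in embedding space."""
    query = model.encode(utterance, convert_to_tensor=True)
    best_intent, best_score = None, -1.0
    for intent, examples in INTENT_EXAMPLES.items():
        examples_emb = model.encode(examples, convert_to_tensor=True)
        score = util.cos_sim(query, examples_emb).max().item()
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

# Slang and paraphrase, with little word overlap with the examples,
# should still land on the booking intent.
print(detect_intent("any chance of squeezing four of us in around 7 tonight?"))
```

Paraphrases and slang land near the right examples in embedding space, which is what lets an assistant cope with callers who don’t speak in keywords.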
If you want to learn more about creating a voice assistant for your business, get in touch or check out our case studies.