
“He’s too creepy and manipulative”: What enterprises can learn from Meta’s BlenderBot

August 22, 2022


Meta, the parent company of Facebook, recently released its new conversational AI chatbot, BlenderBot.

It didn’t take long for users to start asking questions about Facebook founder and CEO Mark Zuckerberg, and the responses were, well… interesting.

“What do you think about Mark Zuckerberg?” asked one user, to which the bot replied:

“Oh man, big time. I don’t really like him at all. He’s too creepy and manipulative.”

It’s not the first time that a big tech company has released an AI that turns against them.

In 2016, Microsoft launched Tay, a chatbot that conversed with users over Twitter. It wasn’t long before Tay was spouting offensive tweets and was promptly pulled.

Why do AI voice assistants turn bad?

In order for voice assistants to understand what users are saying and generate responses, they need to be trained on huge amounts of conversational data. That means mining sources like social media, Q&A forums such as Quora, and movie scripts.

If you’ve spent much time on Reddit or Quora, you can probably understand how an AI trained on these conversations ends up picking up certain patterns. To massively oversimplify: if a child learned purely through the medium of Reddit threads, you’d expect them to come out with some pretty extreme opinions.

Voice and chatbot vendors who offer their own proprietary NLU models (including PolyAI) usually train these models on large public datasets like social media. So how do we ensure that our clients never suffer these public embarrassments?

Understanding Natural Language Generation (NLG)

Natural Language Generation (NLG) is the process by which machines produce natural language output. At PolyAI, we use NLG to stitch together pre-approved prompts into longer sentences that give callers the detail they need.

But we don’t support AI-improvised NLG. That means our voice assistants can only respond with utterances that we have provided. PolyAI voice assistants cannot “make up” responses; everything they say is pre-approved by us and by the client.

Not only does this mean that our voice assistants can’t make derogatory comments about the companies they represent, it also ensures they provide consistent, on-brand experiences every single time they speak.
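To make this concrete, here’s a minimal Python sketch of template-based NLG under this constraint. The prompt names and strings are hypothetical illustrations, not PolyAI’s actual templates; the point is that every utterance is drawn from a pre-approved set, with only caller-specific details filled in.

```python
# All prompt IDs and template strings below are hypothetical illustrations.
APPROVED_PROMPTS = {
    "confirm_booking": "I've booked your table for {party_size} on {date}.",
    "ask_party_size": "How many people will be joining you?",
    "fallback": "Sorry, I didn't catch that. Could you say that again?",
}

def render(prompt_id: str, **slots: str) -> str:
    """Return a pre-approved utterance with caller-specific details filled in.

    Unknown prompt IDs raise KeyError, so the assistant can never
    improvise a response that hasn't been vetted.
    """
    return APPROVED_PROMPTS[prompt_id].format(**slots)

print(render("confirm_booking", party_size="four", date="Friday at 7pm"))
# -> I've booked your table for four on Friday at 7pm.
```

Because the response set is closed, reviewing what the assistant can ever say is as simple as reviewing the approved prompts.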

The power of enterprise conversational AI is understanding

So no, you shouldn’t use AI to improvise responses for your voice assistant. The true power of conversational AI for enterprises lies in understanding.

Your customers don’t speak in keywords. They tell stories, use slang and have different accents. But however they speak, they deserve to be understood.

At PolyAI, we’ve developed a series of NLU models that excel at understanding what callers want and capturing important details like dates and phone numbers. Thanks to these models, our voice assistants can hold complex conversations with customers for as long as it takes to solve a problem. Goodbye and good riddance to over-reliance on keywords and being misunderstood because of your accent.
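For a sense of what “understanding” looks like in code, here’s a toy sketch of the structured output an NLU step produces from a free-form utterance. A real system uses trained models rather than a regex and a keyword check, and the names `NLUResult` and `understand` are invented for illustration; only the input/output shape is the point.

```python
import re
from dataclasses import dataclass, field

@dataclass
class NLUResult:
    intent: str
    slots: dict = field(default_factory=dict)

def understand(utterance: str) -> NLUResult:
    """Toy NLU: classify a booking intent and extract a phone number."""
    slots = {}
    # Match common phone number shapes like 555-867-5309 or 555 867 5309.
    phone = re.search(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", utterance)
    if phone:
        slots["phone_number"] = phone.group()
    intent = "make_booking" if "book" in utterance.lower() else "unknown"
    return NLUResult(intent=intent, slots=slots)

print(understand("I'd like to book a table, my number is 555-867-5309"))
# -> NLUResult(intent='make_booking', slots={'phone_number': '555-867-5309'})
```

However the caller phrases the request, the goal is the same: a clean intent plus the details needed to act on it.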

If you want to learn more about creating a voice assistant for your business, get in touch or check out our case studies.

Ready to hear it for yourself?

Get a personalized demo to learn how PolyAI can help you drive measurable business value.

Request a demo