Reduce call abandonment in your contact center with voice AI Read more

Do generative AI platforms lack adequate safety guardrails?

July 1, 2024

Share

Security blog

Last week, Wired called out Bland AI—a new conversational AI startup—for offering “robot customer service callers [that] could easily be programmed to lie and say they’re human.” In Wired’s test, Bland AI’s public demo bot claimed to be a human and asked a 14-year-old patient to send photos of her inner thigh.

This is extremely concerning for enterprise buyers who are considering implementing AI, specifically generative AI, in customer-facing roles. It once again raises the question – is generative AI safe for enterprise use?

The answer is not straightforward. Yes, generative AI can be leveraged safely and responsibly in enterprise use cases, but buyers should not currently rely solely on DIY “low code” platforms.

These platforms typically offer a user interface (UI) that ‘wraps around’ an existing large language model (LLM). This enables buyers to create their own voice assistants or chatbots from scratch. However, the productization of safety guardrails is still in its early stages. While buyers can license incredibly powerful LLMs, the tools required to prevent prompt injections and hallucinations still require a high degree of fine-tuning.

Generative AI offers a high degree of flexibility that simply wasn’t possible with intent-based models. This means enterprises can offer automated conversations in a way that does not require users to alter their everyday speech patterns.

The promise of generative AI hasn’t gone unnoticed. C-suites and board members are keen to jump on the bandwagon and invite teams to explore its potential. But as the headlines about misbehaving AI continue to drop, how can teams ensure that their solutions don’t fall prey to hallucinations and prompt injections?

Working with generative AI in an enterprise setting

When working with generative AI, enterprises must take safety guardrails seriously. We’ve covered safety guardrails on the PolyAI blog and podcast, and a quick Google search will return thousands of articles outlining how machine learning scientists are approaching the topic.

What’s important to remember is that these guardrails are highly specialized and must be fine-tuned for your specific use cases. If you’re working with a low-code platform, it’s important to understand the degree of visibility and customization available to ensure safe, customer-friendly conversations.

When working with a conversational AI vendor, you should have a choice over what behavior you would like your AI to have, from general behavior to fine-grained details. If a vendor can’t do that for you, they are putting your brand in the hands of a generic generative AI model and putting your brand at risk.

A purely generative approach offers superior conversational flows but is more susceptible to prompt injections and hallucinations. An intent-based approach misses out on the flexibility of generative AI, but is rigid enough that the conversation simply can not be derailed. You can even use a combination of generative and intent-based models for different inquiries or transactions within the same call.

Generative AI safety at PolyAI

PolyAI has deployed a number of generative-AI powered voice assistants that handle customer service calls for some of the world’s best-known brands. We have robust safety frameworks in place that are fine-tuned for each customer’s requirements and tested rigorously to remove risk before deployment.

To learn more about deploying generative AI, without the risk, get in touch with PolyAI today.

Ready to hear it for yourself?

Get a personalized demo to learn how PolyAI can help you drive measurable business value.

Request a demo

Request a demo