Automatic speech recognition: systems + ASR biasing vs. bias in ASR

July 25, 2024


Delivering excellent customer service over the phone starts with effective listening. A voice assistant must accurately hear and understand what the customer says before responding appropriately.

Achieving this requires various best-in-class technical capabilities that make interacting with AI-driven voice assistants feel natural and lifelike.

What is automatic speech recognition (ASR)?

Automatic speech recognition (ASR), also known as speech-to-text, is the technology that transcribes spoken language into written text. It’s the key to enabling voice assistants to accurately capture and understand what a caller is saying.

ASR systems rely on a blend of signal processing, machine learning, and natural language processing (NLP) to perform this crucial task. The transcribed text can then be digested by large language models (LLMs) and fine-tuned for specific accents, languages, and use cases, including handling customer service interactions and improving accessibility.

As ASR technology evolves, understanding its nuances becomes essential. For instance, distinguishing between ASR biasing—where the system is intentionally tuned to prioritize certain words or phrases—and bias in ASR, which can lead to inaccuracies or unfair treatment, is critical.

This awareness can help create more effective and accurate phone experiences, ultimately enhancing the overall user experience in phone-based conversations.

ASR biasing vs. bias in ASR

Successfully listening to customers and capturing spoken language over the phone requires accurate ASR systems.

Understanding the difference between ASR biasing and bias in ASR can help create more effective and accurate phone experiences and improve the overall user experience in phone-based conversations.

What is ASR biasing?

ASR biasing involves intentional techniques to improve the accuracy and relevance of speech recognition systems. It works by steering the system to better recognize particular words, phrases, or contexts.

For example, if you’re asking for someone’s order number, you can tell the ASR to listen out for numbers. If it hears “tree fork hate,” it can deduce that the caller probably said “348”.

This is particularly useful in scenarios where specific vocabulary, jargon, or names are common but not typically well-handled by general-purpose ASR models.
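To make the idea concrete, here is a toy sketch of context biasing for a digit-collection step. Real ASR systems apply biasing inside the decoder (e.g. via phrase hints or boosted vocabularies); this post-processing table, and the function name, are purely illustrative.

```python
# Illustrative mishearing table for when we KNOW the caller is reading a number.
# None of these mappings come from a real ASR API; they are assumptions.
DIGIT_BIAS = {
    "zero": "0", "oh": "0", "one": "1", "won": "1", "two": "2", "to": "2",
    "three": "3", "tree": "3", "four": "4", "fork": "4", "for": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "hate": "8",
    "nine": "9",
}

def bias_to_digits(transcript: str) -> str:
    """Rewrite a raw transcript into a digit string, given the context
    that the caller is expected to be reading out a number."""
    digits = [DIGIT_BIAS[w] for w in transcript.lower().split() if w in DIGIT_BIAS]
    return "".join(digits)

print(bias_to_digits("tree fork hate"))  # -> 348
```

The key point is that the mapping is only safe because the dialogue state tells us a number is expected; applying it to free-form speech would corrupt words like “to” and “for”.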

What is bias in ASR?

Unlike ASR biasing, which is a deliberate design to steer towards more accurate responses, bias in ASR happens unintentionally when the system works better or worse for some groups of people.

A system trained predominantly on callers with British English accents might perform poorly for customers with American accents, for example. Biases can also occur when handling:

  • Interactions with different genders
  • Callers with speech impairments
  • Regional dialects

These biases result in misinterpretation of words and higher error rates. In a contact center, these errors can frustrate customers because they have to repeat themselves, or they might not be able to use automated systems to resolve their issues.
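One common way to surface this kind of bias is to measure word error rate (WER) separately for each group of callers. The sketch below is a minimal, self-contained WER implementation plus a per-group comparison; the group labels and sample data are hypothetical.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic programme over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / max(len(ref), 1)

def wer_by_group(samples):
    """Average WER per caller group; a large gap between groups suggests bias."""
    totals = {}
    for group, ref, hyp in samples:
        totals.setdefault(group, []).append(wer(ref, hyp))
    return {g: sum(v) / len(v) for g, v in totals.items()}
```

Running `wer("my order is three four eight", "my order is tree fork hate")` gives 0.5: three of six reference words were substituted. Tracking that number per accent, gender, or dialect group over time turns an invisible fairness problem into a measurable one.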

How to overcome bias in ASR

Overcoming bias in ASR is crucial for achieving accurate speech recognition and efficient customer service. It’s also essential for avoiding offensive and discriminatory behavior. Applying the following techniques can help to reduce the likelihood of bias in ASR.

Diverse and audited datasets

Ensure that the data used for training is carefully reviewed and representative and that the model training processes are robust against biased information.

  • Implement guardrails: Set boundaries on the topics a voice assistant can address. This ensures the assistant stays on topic and only uses its trained data to respond effectively. Without these AI guardrails, there’s a higher risk of inappropriate responses, such as swearing or offensive jokes, which can damage your brand reputation.
  • Continuous monitoring: Biases and discrimination are ongoing challenges, similar to safety and security issues. To stay updated, it’s important to listen to calls regularly. These sessions help you continuously monitor real interactions, allowing you to respond quickly and make necessary updates to your system.
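Continuous monitoring can be partially automated. The sketch below flags calls for human review when average ASR confidence is low or the caller had to repeat themselves; the call schema, thresholds, and function name are all assumptions for illustration, not a real monitoring API.

```python
def flag_for_review(calls, confidence_floor=0.75, max_repeats=1):
    """Return the ids of calls that warrant a human listening session:
    low average ASR confidence, or too many caller repetitions."""
    flagged = []
    for call in calls:
        avg_conf = sum(call["confidences"]) / len(call["confidences"])
        if avg_conf < confidence_floor or call["repeats"] > max_repeats:
            flagged.append(call["id"])
    return flagged

calls = [
    {"id": "c1", "confidences": [0.90, 0.85], "repeats": 0},  # healthy call
    {"id": "c2", "confidences": [0.60, 0.70], "repeats": 0},  # low confidence
    {"id": "c3", "confidences": [0.90, 0.95], "repeats": 3},  # caller repeated
]
print(flag_for_review(calls))  # -> ['c2', 'c3']
```

If flagged calls cluster around a particular accent or dialect, that is exactly the kind of signal the monitoring sessions above are meant to catch early.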


Create accurate and human-like customer service with PolyAI

By implementing diverse datasets, setting guardrails, and continuously monitoring interactions, companies can mitigate bias in ASR and create accurate, effective interactions for all customers.

The machine learning teams at PolyAI rigorously test numerous ASR systems, often across multiple models, and apply spoken language understanding (SLU) principles to improve transcription accuracy. This significantly improves the accuracy of speech recognition in real customer phone calls.

The PolyAI tech stack enables voice assistants to accurately understand alphanumeric inputs, as well as callers from different places, with different accents, and in noisy environments, delivering efficient customer service over the phone.

We recently deployed the first-ever Croatian-speaking enterprise voice assistant that routes callers using natural language and answers FAQs for Zagrebačka banka.

Improving NPS by 14 points with a Croatian voice assistant for UniCredit

Read the case study

Ready to enhance your customer service over the phone? Discover how PolyAI’s customer-led voice assistants can make a difference for your contact center.

Request a demo with our team today!

Automatic speech recognition FAQs

What are the key components of an ASR system?

An ASR system typically includes three key components:

  • The acoustic model, which interprets sound waves into phonetic units.
  • The language model, which helps predict word sequences based on context.
  • The pronunciation model, which maps phonetic units to words.

Together, these components and algorithms enable the system to convert spoken language into text with a high level of accuracy.
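The interplay between the acoustic model and the language model can be sketched as a simple rescoring step: each candidate transcription gets an acoustic score and a language-model score, and the system picks the best weighted combination. The scores, weight, and candidate list below are invented for illustration; production decoders do this incrementally during beam search rather than over a finished list.

```python
def rescore(hypotheses, lm_weight=0.8):
    """Pick the candidate with the best combined score.
    Scores are log-probabilities, so higher (less negative) is better."""
    return max(hypotheses, key=lambda h: h["acoustic"] + lm_weight * h["lm"])

# Hypothetical candidates for a caller reading out an order number.
candidates = [
    # Sounds slightly closer to the audio, but is improbable English.
    {"text": "tree fork hate",   "acoustic": -4.0, "lm": -12.0},
    # Sounds slightly less close, but the language model strongly prefers it.
    {"text": "three four eight", "acoustic": -5.0, "lm": -6.0},
]

best = rescore(candidates)
print(best["text"])  # -> three four eight
```

This is also where ASR biasing from earlier in the post plugs in: boosting expected phrases effectively raises their language-model score before the combination is taken.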

What are some examples of ASR systems?

Examples of artificial intelligence-powered ASR systems include:

  • Google Speech-to-Text
  • Amazon Transcribe
  • Microsoft Azure Speech

These platforms are widely used for various applications, from transcription services to voice-controlled interfaces, and they form the backbone of many modern voice-activated technologies.

What is the difference between ASR and voice recognition?

Automatic speech recognition technology converts spoken language into text, enabling systems to understand and process human speech.

Voice recognition, on the other hand, identifies and verifies the identity of a speaker based on their voice.

While ASR focuses on what is being said, voice recognition is concerned with who is saying it.

Can ASR systems recognize multiple languages?

Yes, modern ASR systems can recognize multiple languages. Many advanced systems are designed to handle a wide range of languages and dialects, allowing them to serve diverse user bases across different regions and cultures.

This capability is crucial for businesses that operate globally and need to interact with customers in their native languages.

Ready to hear it for yourself?

Get a personalized demo to learn how PolyAI can help you
 drive measurable business value.

Request a demo
