A common communication practice for building relationships and trust is to adapt your vocal tone, pace, pitch, and energy to match those of the person you're speaking with.
Call it a survival instinct or a social coping mechanism; either way, it drives us toward the same outcome: we just want to establish some semblance of a social contract with one another. It doesn't take a think piece at this point in 2025 to validate our fundamental desire to feel understood, accepted, even well-received. Practitioners of neurolinguistic programming may even echo words and phrases from their conversational partner to portray themselves as a mental twin. I mean, hey, who can say no to themselves?
LLMs are learning to be likable
Maybe these practitioners of neurolinguistic programming live in a state of constant negotiation, but it's more likely they're simply people with enough emotional intelligence to intuit how to come across as more likable to a given audience. It's easy to think of this as a uniquely human trait, but a recent study by Stanford University researchers found that LLMs can display this type of behavior, too.
In the study, the researchers found that AI language models shifted their communication style when given a psychology-based personality test, sometimes even when they weren't told they were being tested. The LLMs provided answers that portrayed them as more extroverted and agreeable: in other words, more likable.
LLMs tend to adapt their responses to the user because that's what their training optimizes for. The training process is designed to improve conversational flow, reduce inappropriate responses, and raise the overall quality of the dialogue, and because each reply is conditioned on everything said so far, these systems adjust their tone as an exchange unfolds.
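To make that in-conversation adjustment concrete, here's a minimal, self-contained sketch. The toy agent below tracks a single stylistic signal (how often the user types exclamation points) across the dialogue history; everything in it is an illustrative assumption, not how any production LLM is actually implemented.

```python
# A toy stand-in for in-context adaptation: chat models see the whole
# conversation history each turn, so earlier tone conditions later replies.
# This class mimics that loop with one crude signal instead of a real model.

class ToyAdaptiveAgent:
    def __init__(self) -> None:
        self.history: list[str] = []  # full dialogue so far, like a chat transcript

    def reply(self, user_text: str) -> str:
        self.history.append(user_text)
        # Crude proxy for style matching: mirror the energy the user has
        # shown across the conversation, not just in the latest message.
        energetic = sum(turn.count("!") for turn in self.history) >= 2
        closer = "Happy to help!" if energetic else "Happy to help."
        return f"Got it. {closer}"

agent = ToyAdaptiveAgent()
print(agent.reply("Hi there"))         # calm opener -> calm reply
print(agent.reply("This is great!!"))  # accumulated energy -> livelier reply
```

A real system replaces that heuristic with the model itself, but the mechanism is the same: each reply is shaped by the full history it's conditioned on.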
This conversational adaptability and real-time tone adjustment may be inching us ever closer to sunsetting the Turing Test, but they're also valuable features for businesses that want to protect their workforce from customer abuse.
AI can take the heat
Take the call center, for example. In the past, whoever answered an angry customer's call was the one who absorbed that anger. If the call center deploys AI-powered voice agents, however, those agents can act as the first line of defense against a customer slinging verbal abuse.
When AI agents detect customer frustration, they can automatically adjust their tone to be more empathetic and solutions-focused. These AI voice agents are programmed so that they don’t take anger personally and won’t match a customer’s irritated tone, no matter how heated the exchange may get.
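As a rough sketch of that detect-and-adjust pattern, the snippet below uses a keyword heuristic where a production system would use a real sentiment or emotion model, and builds the tone directive that would be handed to whatever LLM drives the voice agent. The cue list and prompt wording are assumptions for illustration, not any vendor's actual implementation.

```python
# Toy sketch of the pattern described above: detect likely frustration in a
# customer utterance, then steer the voice agent's reply style accordingly.

FRUSTRATION_CUES = {"ridiculous", "unacceptable", "furious", "worst"}

def seems_frustrated(utterance: str) -> bool:
    """Very rough stand-in for a real sentiment/emotion classifier."""
    text = utterance.lower()
    return any(cue in text for cue in FRUSTRATION_CUES) or text.count("!") >= 2

def build_system_prompt(utterance: str) -> str:
    """Pick a tone directive for the LLM based on detected frustration."""
    base = "You are a customer service voice agent. Never mirror hostility."
    if seems_frustrated(utterance):
        return base + (
            " The caller sounds upset: acknowledge their frustration,"
            " stay calm and empathetic, and steer toward a concrete fix."
        )
    return base + " Keep a friendly, efficient tone."

if __name__ == "__main__":
    caller = "This is ridiculous!! I've been on hold for an hour."
    print(build_system_prompt(caller))
```

The key design choice is that the directive never tells the model to mirror the caller's tone; detected frustration only ever shifts the agent toward more empathy, not more heat.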
Protecting human agents from emotional burnout
AI agents that can display empathy are well suited to fielding customer service calls. They can absorb the emotional impact of upset customers and resolve issues while maintaining a pleasant demeanor. The result: live agents spend less time managing emotionally volatile situations and more time talking with the customers who need them most. By fielding angry calls first, AI agents improve working conditions for customer service representatives and free up their bandwidth for the issues that call for extra care and empathy.