
How not to talk like an LLM

August 21, 2025


LinkedIn is abuzz with content claiming to expose the hallmarks and “giveaways” of LLM-generated text. As a lifelong lover of the em-dash, I’ve found myself in recent weeks grieving the loss of this most useful punctuation mark, whose reputation is now undeniably tarnished—hopefully not permanently—by its associations with ChatGPT and AI-generated content.

Likewise, we are told that the overuse of words like “delve”, “innovative”, “profound” and “intricate” is also a kind of linguistic shibboleth (a bit of language usage that can identify the user as part of one group or another) for large language models.

Most of this advice about which words to avoid for fear of sounding like AI is geared toward long-form written content like articles, presentation scripts, and, well, blog posts. However, there are also tell-tale signs of LLM output in collaborative dialogue, like our conversations over the phone or text message. We’ll get into those in a minute, but first let’s back up for a sec…

Why should we care about not sounding like an AI?

Anyone who has used ChatGPT knows it can be incredibly useful. Those linguistic quirks like overusing em-dashes or too frequently “delving” into the nuances of an issue are actually side effects of its usefulness. ChatGPT is trained to be an all-purpose text generator, striking a tone and embodying a persona that is most broadly helpful and applicable. This means being somewhat impersonal and formal in its sentence structure—hence all the em-dashes—as well as generally neutral and noncommittal in its word-choice so as not to seem too flashy, flamboyant, or opinionated.

In general, when people worry about their long-form text seeming AI-generated—particularly in non-artistic prose—they are worried about being perceived as lazy for not having written it themselves, or that the ideas they present might not seem truly their own—not that AI-generated text is inherently unclear or uninformative (potential for hallucinations aside).

However, when it comes to collaborative, task-based dialogue, like a phone conversation with a customer service representative, the very things that make ChatGPT helpful for long-form text generation are harmful for fostering the sense of Social Presence that is crucial when navigating a task through the medium of spoken conversation.

Social Presence Theory

Since long before the days of large language models, folks in the field of Human-Computer Interaction (HCI) were talking about Social Presence, which is the feeling of “being with another.” Research consistently shows that an increased sense of Social Presence when engaging with a virtual agent is correlated with all kinds of positive outcomes for both the business and the end user, whether or not the user actually knows it’s a robot on the other end. When Social Presence increases:

  • Users feel a greater sense of trust that the information they are providing will be handled securely and sensitively
  • Users are more likely to adhere to the advice given by the agent, resulting in positive business outcomes and more straightforward problem solving for the user with less back-and-forth
  • Users feel a greater satisfaction with the outcome of the interaction, which contributes greatly to customer loyalty

Unfortunately, a lot of the attributes of ChatGPT that were designed to make it the most usable—its formality, its neutrality, its verbosity, its tendency to over-explain everything in great depth—are super effective at detracting from a sense of Social Presence. That’s why, in the age of LLMs, the Conversation Designer is as important as ever when it comes to structuring interactions in ways that avoid these pitfalls, and wrangling your large language model, through clever prompting and guardrails, into better linguistic habits. Let’s see what that looks like:
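To make “prompting and guardrails” a little more concrete, here is a minimal Python sketch: a style-focused system prompt paired with a lightweight post-check that flags scripted-sounding phrasings before a reply is spoken. The prompt wording, the phrase list, and the function name are all illustrative assumptions, not PolyAI’s actual tooling.

```python
# A minimal sketch of style guardrails for a voice agent.
# The prompt text and flagged phrases below are hypothetical examples.

STYLE_PROMPT = """You are a voice agent on a phone call.
- Keep replies to one or two short sentences.
- Ask for one piece of information at a time.
- Do not explain why you need something unless the caller asks.
- Prefer plain, conversational wording over formal phrasing."""

# Phrasings that tend to mark a reply as scripted or overly formal.
SCRIPTED_PHRASES = [
    "how may i assist you",
    "please provide me with",
    "i would be happy to",
    "in order to",
]

def sounds_scripted(reply: str) -> list[str]:
    """Return the scripted phrasings found in a candidate reply."""
    lowered = reply.lower()
    return [p for p in SCRIPTED_PHRASES if p in lowered]

# A reply that trips the check could be regenerated or rewritten
# before it is spoken to the caller.
print(sounds_scripted(
    "In order to assist you, please provide me with your account number."))
```

In a real system a check like this would sit alongside the prompt, catching replies where the model drifts back into its formal defaults.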

Designing for Social Presence

1. Respect the customer’s time: Be concise

Customers typically want to spend as little time on the phone with support as possible. They want a collaborative partner in accomplishing their specific task, not a loquacious salesperson who seems to be reading from a script and isn’t really with them. Often, less polished is the way to go:

Overly structured and formal:

“Our exclusive Birthday Bonus Rewards Program offers you the opportunity to enjoy a complimentary decadent dessert on your special day at any of our locations! Would you like me to enroll you so you can start celebrating in style?”

Relaxed and unscripted:

“I can also add your birthday to our rewards program so you get a free dessert next time you’re here. Would you like me to do that?”

2. When it pays to be wordy

Concise speech is best in most situations, but there is one exception, where clarity matters most: when you need to explain something complicated or detailed. Unlike in a chatbot setting, where a customer can re-read what’s in front of them, someone calling doesn’t want to have to ask for the information to be repeated.

In these cases:

  • It’s important to slow speech down
  • Being slightly more detailed, repetitive, or wordy can help the listener understand better

This is especially important if the customer has specifically asked for an explanation or they need extra context to understand and process the information. For example, if a user asks, “Can you explain what the SNAP program is?”

Bad practice:

“SNAP is a food assistance program for low-income households. Want me to send you a link with more info?”

Best practice:

“SNAP, or Supplemental Nutrition Assistance Program, is a federal program that helps eligible low-income individuals and families buy food. The amount you receive depends on your income, household size, and expenses. You can use the benefits at many grocery stores and some farmers' markets. Would you like me to send you a link with step-by-step info on how to apply?”

Research shows that, in addition to slowing down the actual pace of speech, being somewhat more verbose or repetitive in your explanations can allow the user more time to process what is being said; using this technique is another way of seeming more present and accommodating of the real person you’re talking to.

3. Avoid overly formal or scripted language

I know what you’re thinking: these rules sound contradictory. “Don’t say too much.” “Make sure you say enough.” That’s exactly why human-machine conversation design takes skill. It’s about knowing when brevity serves the customer and when detail builds understanding.

LLMs often miss that nuance, defaulting to verbose, overly formal phrasing that feels cold or unfamiliar, again detracting from that sense of Social Presence.

Bad practice:

“Could you please provide me with your account number?”

Best practice:

“Could you tell me your account number, please?”

Bad practice:

“How may I assist you today?”

Best practice:

“How can I help?”

4. Don’t over-explain

Many large language models have a deeply ingrained tendency to:

  1. First explain the reasoning for a question or request
  2. Then make the actual request, with reference to the reasoning provided

For example, first, the LLM might say:

“In order to check for outages, I’ll need to look up your account. Could you tell me your account number?”

and then a couple turns later:

“I’ve found your account. To verify your identity, I’ll need your birth date. Could you please provide me with that?”

LLMs are trained to do this because we want their actions to be as predictable and transparent as possible. Including these explanations or overt bits of reasoning in the text they return can improve their adherence to the business logic at hand, and it also helps the user of, say, ChatGPT better understand the relationship of their prompt to the output.

The problem is, when humans are engaging in natural task-based conversation, there is a lot of shared context from our prior interactions and shared understanding of the task at hand that is implicit in everything we say. When two interlocutors have a mutual goal (for example, an agent and user verifying the user’s account so they can pull up a copy of their bill), there is a shared assumption that any question asked by the agent is necessary for achieving that goal.

When all of those normally implicit bits of context are made explicit on every turn, it makes it feel like there isn’t anyone actually there with you, sharing in that context and mutual goal.

Instead of following the pattern of:

[explanation of why I need something] [request for the thing]

99% of the time, all of the important information and the request itself can be formed into a single interrogative sentence, even if some explanation is required:

“No problem, what’s your account number?”

“Great, and just to verify that I have the right account here, could you tell me your date of birth?”

Both of these examples are asking for information clearly, but they still do it differently:

The first one is direct: just a straightforward question with no extra explanation. It’s obvious why the agent is asking for an account number, since we need to look up their account.

The second one adds a short explanation as part of the question: This makes it feel a bit more polite and clear, without being wordy.
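The pattern above can be sketched as a tiny templating helper: instead of rendering [explanation] and [request] as two separate turns, any needed context is folded into one interrogative sentence. The function name and templates are illustrative assumptions, not a real API.

```python
from typing import Optional

def single_question(request: str, context: Optional[str] = None) -> str:
    """Render one interrogative turn. If a short context clause is
    needed, fold it into the question itself rather than emitting a
    standalone explanation followed by the request."""
    if context:
        # Lower-case the request's first letter so the two clauses
        # read as one sentence.
        return f"{context}, {request[0].lower()}{request[1:]}"
    return request

# Direct question, no explanation needed:
print(single_question("What's your account number?"))

# Short context folded into the question:
print(single_question("Could you tell me your date of birth?",
                      context="Just to verify that I have the right account here"))
```

A helper like this is less about the string manipulation and more about the design constraint it encodes: one request, one sentence, context only when it earns its place.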

The Syntax of Social Presence

In everyday conversation, the way people speak often reflects the situation, what’s already been said, and who they’re talking to. These patterns help make interactions feel natural and collaborative, rather than stiff or scripted, and contribute to that all-important sense of Social Presence. Below are some examples in English that human agents and dialogue designers commonly use, but that large language models don’t always produce naturally.

Some of these examples depend on context or dialect and may not work in every situation. They’re meant as inspiration rather than strict rules.

Present Progressive vs. Plain Present

What it does: Gives a sense of active collaboration; avoids overly confident assertions

Instead of this: “I don’t see any accounts under that phone number…”

Try this: “I’m not seeing any accounts under that phone number…”

Why it matters: Signals ongoing effort and allows room for error

Indexicality and Presupposition

What it does: References shared conversational context without overt repetition

Instead of this: “Since you said you prefer weekends, how does Saturday at 2:30 sound?”

Try this: “How about Wednesday instead?” / “In that case, how does Saturday at 2:30 sound?”

Why it matters: Avoids LLM-style over-explicitness; maintains natural flow

Ethical Datives / First Person Datives

What it does: Includes the speaker as an indirect object even when not strictly necessary

Instead of this: “Could you read your account number aloud?”

Try this: “Can you log into your account for me?” / “Could you read me your account number?”

Why it matters: Creates a sense of involvement and rapport; the speaker is implicated in the events of the sentence even if not a direct participant

Face-saving Past Tense

What it does: Mitigates face-threatening acts by placing requests or desires in the past tense

Instead of this: “When are you trying to come in?”

Try this: “When were you trying to come in?” / “When did you want to come in?”

Why it matters: Softens potential conflict or imposition; more polite

Nurse ‘We’ Construction

What it does: The speaker says “we” instead of “you”, implicating themselves in the event without being the actual agent/subject

Instead of this: “Do you wanna look for other tables around that time?”

Try this: “Do we wanna look for other tables around that time?”

Why it matters: Softens potential imposition and creates a sense of collaboration

Designing for presence in every interaction

The quirks that make LLMs so good at generating text—formality, neutrality, over-explaining—are the same things that make them sound off when dropped into a spoken dialogue. The job of a conversation designer isn’t just to rein in those habits, but to actively shape interactions that feel natural, collaborative, and human.

That means knowing when brevity builds trust, when detail supports understanding, and when context speaks for itself without needing to be spelled out.

Designing for Social Presence is about building dialogue that feels like someone is really there with you. In customer service, that sense of presence makes all the difference.


At PolyAI, our dialogue design team focuses on turning these quirks into the most human conversations for our customers. They craft entire agentic experiences, including how a customer service AI reasons, responds, and adapts to complex customer needs. It’s problem-solving at the intersection of language, psychology, and technology. To learn more, schedule a demo.

Ready to hear it for yourself?

Get a personalized demo to learn how PolyAI can help you drive measurable business value.

Request a demo
