LLMs Struggle with Politeness: Humans Still Hold the Edge

New research reveals that large language models lack the nuanced, context-sensitive politeness strategies that humans use.

A recent study from Haoran Zhao and Robert D. Hawkins investigates how Large Language Models (LLMs) handle politeness. They found that while LLMs can be polite, they don't use the same diverse and context-sensitive strategies as humans. This presents a challenge for aligning AI with complex social interactions.

By Mark Ellison

November 2, 2025

4 min read


Key Facts

  • The study compares human and LLM politeness strategies.
  • LLMs struggle to deploy the context-sensitive repertoire of politeness strategies that humans use.
  • Humans use positive (compliments) and negative (hedging) politeness strategies.
  • Larger LLMs do not fully replicate the diverse human politeness strategies.
  • The research was published by Haoran Zhao and Robert D. Hawkins.

Why You Care

Ever wondered if your AI assistant truly understands your social cues? Can a computer really be polite like a person? New research suggests that Large Language Models (LLMs) still have a lot to learn about social graces. This finding shapes how you interact with AI and how AI integrates into daily life. Whether your future AI interactions feel more human, or less so, will depend on how this gap is closed.

What Actually Happened

A study titled “Comparing human and LLM politeness strategies in free production,” by Haoran Zhao and Robert D. Hawkins, investigated the politeness capabilities of LLMs. The researchers compared human and LLM responses in both constrained and open-ended production tasks. The goal was to see whether LLMs can deploy a context-sensitive repertoire of politeness strategies. These strategies include positive approaches, like compliments, and negative ones, such as hedging or indirectness, the paper states. The research specifically looked at how LLMs balance informational and social goals in their communication.
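To make that comparison concrete, here is a minimal sketch of how responses could be tagged by strategy type. This is purely illustrative, not the authors’ method: the keyword lists and the tag_strategies function are hypothetical stand-ins for the human annotation or trained classifiers a real analysis would require.

```python
# Illustrative only: a naive keyword tagger for the positive/negative
# politeness distinction described in the article. Keyword lists are
# hypothetical; real studies rely on annotation, not string matching.

POSITIVE_MARKERS = ["great", "wonderful", "i love", "nice work", "thank you"]
NEGATIVE_MARKERS = ["perhaps", "maybe", "i wonder if", "would you mind"]

def tag_strategies(response: str) -> set[str]:
    """Return the coarse politeness strategies detected in a response."""
    text = response.lower()
    tags = set()
    if any(m in text for m in POSITIVE_MARKERS):
        tags.add("positive")  # rapport-building: compliments, interest
    if any(m in text for m in NEGATIVE_MARKERS):
        tags.add("negative")  # imposition-softening: hedging, indirectness
    return tags or {"direct"}

# A human-style reply mixes strategies; a blunt reply uses none.
human_reply = "I love the energy here! Maybe trim the intro a little?"
blunt_reply = "The introduction is too long. Shorten it."
print(tag_strategies(human_reply))  # {'positive', 'negative'} (order may vary)
print(tag_strategies(blunt_reply))  # {'direct'}
```

Crude as it is, even a tagger like this makes the study’s core question visible: given the same task, do model outputs land in the same mix of strategy categories that human outputs do?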

Why This Matters to You

This research highlights a crucial area where AI still lags behind human intelligence. If you rely on AI for customer service, content creation, or even just casual conversation, its ability to navigate social nuances is important. Imagine an AI chatbot that struggles to soften a negative response or offer a genuine compliment. This could lead to misunderstandings or a less satisfying user experience for you.

Key Findings on Politeness Strategies:

  • Positive Approaches: Humans use compliments and expressions of interest to build rapport.
  • Negative Approaches: Humans employ hedging and indirectness to minimize imposition.
  • LLM Limitation: Larger models do not yet fully replicate this diverse human repertoire.

For example, think of a customer service bot. If it can’t express empathy or gently redirect a frustrated customer, your experience will suffer. This study indicates that current LLMs, even larger ones, don’t fully grasp these subtle human communication patterns, the team revealed. How might a lack of nuanced politeness in AI affect your trust in these systems?

“Humans deploy a rich repertoire of linguistic strategies to balance informational and social goals,” the paper states. This includes everything from offering compliments to using indirect language to soften requests. The researchers found that LLMs do not yet fully mirror this complex human behavior.

The Surprising Finding

Here’s the twist: while LLMs can generate polite language, the research shows they don’t use the same diverse strategies as humans. You might assume that larger, more capable LLMs would naturally pick up on all these subtleties. However, the study finds that even these models don’t replicate the full spectrum of human politeness. This is surprising because LLMs are known for their vast training data and ability to mimic human text. It challenges the assumption that simply scaling up models will automatically lead to human-level social intelligence. The study indicates that even the larger models still lack the full context-sensitive repertoire of human politeness. This suggests a deeper, more fundamental challenge in AI alignment, beyond language generation alone.

What Happens Next

This research points to a clear direction for future AI development. Over the next 12-18 months, we can expect researchers to focus on improving LLMs’ understanding of social pragmatics. This means moving beyond generating grammatically correct sentences to producing socially intelligent ones. For example, future LLMs might be trained on datasets rich in nuanced social interactions, helping them learn when to offer a compliment versus when to use indirect language. Developers will also likely integrate more alignment techniques into their models. This could lead to AI assistants that feel more natural and intuitive in their interactions with you. The industry implication is a push for more ‘socially aware AI,’ which could enhance everything from virtual assistants to educational tools. The goal is to ensure AI can navigate complex social situations with the same ease as a human.
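In the meantime, developers can nudge today’s models toward a particular strategy through prompting. Below is a minimal sketch using the OpenAI Python SDK; the system prompt wording and model name are illustrative assumptions, not a recipe from the study, and any chat-style API would work the same way.

```python
# Illustrative sketch: steering a chat model toward negative politeness
# (hedging, indirectness) with a system prompt. Prompt wording and model
# name are assumptions; this is not the method used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HEDGED_STYLE = (
    "When delivering critical or negative information, soften it: "
    "hedge ('perhaps', 'it might be'), ask rather than command, and "
    "acknowledge the other person's effort before suggesting changes."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do here
    messages=[
        {"role": "system", "content": HEDGED_STYLE},
        {"role": "user", "content": "Tell the customer their refund request was denied."},
    ],
)
print(response.choices[0].message.content)
```

A fixed system prompt like this can mimic one strategy at a time, but the study’s point is precisely that humans switch strategies with context, something no static instruction can fully reproduce.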
