LLMs Learn from How You Talk, Not Just What You Say

New research introduces 'conversational geometry' to improve AI interaction.

A new study reveals that Large Language Models (LLMs) can learn from the dynamics of a conversation, not just the words exchanged. This 'conversational geometry' approach offers a privacy-preserving way to enhance AI alignment and collaboration.

By Mark Ellison

November 19, 2025

3 min read


Key Facts

  • LLMs can learn from interaction dynamics, not just text content.
  • TRACE (Trajectory-based Reward for Agent Collaboration Estimation) uses 'conversational geometry'.
  • A model based solely on interaction dynamics achieved 68.20% accuracy.
  • A hybrid model combining dynamics and text achieved the highest accuracy of 80.17%.
  • This approach offers a privacy-preserving framework for AI alignment.

Why You Care

Ever feel like your AI assistant just isn’t getting you, even when your words are clear? What if the way you interact (your conversational dance) is just as important as the specific things you say? New research suggests that how a conversation unfolds is itself a predictor of its success. This could change how Large Language Models (LLMs) understand and respond to your needs, making them far more intuitive.

What Actually Happened

Researchers Sian Gooding and Edward Grefenstette have introduced a concept called ‘conversational geometry’: the idea that the dynamic flow of an interaction carries information useful for training LLMs. Traditionally, LLMs learn from the content of text. The new approach, named TRACE (Trajectory-based Reward for Agent Collaboration Estimation), instead focuses on the geometric properties of a dialogue’s embedding trajectory, essentially the shape traced by the back-and-forth exchanges. The team found that a reward model trained solely on these structural signals performed surprisingly well.
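The paper's exact feature set isn't reproduced here, but to make "geometric properties of an embedding trajectory" concrete, here is a minimal, illustrative sketch: assuming each conversational turn has been embedded as a vector, we can compute simple shape descriptors of the path those vectors trace (total path length, net displacement, straightness, and how sharply the conversation "turns"). The function name and features are hypothetical, not TRACE's actual implementation.

```python
import numpy as np

def trajectory_features(turn_embeddings: np.ndarray) -> dict:
    """Illustrative geometric features of a dialogue's embedding trajectory.

    turn_embeddings: (n_turns, dim) array, one embedding per conversational turn.
    These features are hypothetical examples, not the published TRACE feature set.
    """
    steps = np.diff(turn_embeddings, axis=0)       # turn-to-turn displacement
    step_norms = np.linalg.norm(steps, axis=1)     # how far each reply "moves"
    path_len = float(step_norms.sum())             # total distance travelled
    net = float(np.linalg.norm(turn_embeddings[-1] - turn_embeddings[0]))
    # Cosine between consecutive steps: near 1.0 means the dialogue keeps
    # moving in the same direction; low values mean sharp topic turns.
    cosines = [
        float(np.dot(steps[i], steps[i + 1])
              / (step_norms[i] * step_norms[i + 1] + 1e-9))
        for i in range(len(steps) - 1)
    ]
    return {
        "path_length": path_len,
        "net_displacement": net,
        "straightness": net / (path_len + 1e-9),   # 1.0 = straight line
        "mean_turn_cosine": float(np.mean(cosines)) if cosines else 1.0,
    }
```

A reward model in this style would score a dialogue from such features alone, never looking at the words themselves.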

Why This Matters to You

This development has significant implications for how you interact with AI. Imagine an AI that doesn’t just process your commands but also understands the flow of your discussion. This could lead to more natural and effective collaborations. For example, think of a customer service chatbot that can sense your frustration not just from your words, but from the rhythm and pacing of your replies. This understanding could help it de-escalate a situation more effectively.

What’s more, this method offers a privacy-preserving framework. It can align AI agents without needing to deeply analyze the sensitive content of your conversations. How much more comfortable would you be knowing an AI is learning from your interaction style rather than every word you type?

As the research shows, “a reward model trained only on these structural signals achieves a pairwise accuracy (68.20%) comparable to a LLM baseline that analyzes the full transcript (70.04%).” This means the how is nearly as effective as the what.
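Pairwise accuracy here means: given pairs of dialogues where one is known to be the better collaboration, how often does the reward model score the preferred one higher? A minimal sketch of the metric (the scoring model itself is abstracted away):

```python
def pairwise_accuracy(preferred_scores, rejected_scores):
    """Fraction of preference pairs where the reward model ranks the
    preferred dialogue above the rejected one."""
    assert len(preferred_scores) == len(rejected_scores)
    correct = sum(p > r for p, r in zip(preferred_scores, rejected_scores))
    return correct / len(preferred_scores)
```

On this metric, the structure-only model's 68.20% sits just below the full-transcript baseline's 70.04%.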

Here’s a quick look at the performance:

Model Type                   Accuracy
Interaction Dynamics Only    68.20%
Textual Analysis Only        70.04%
Hybrid Model                 80.17%

The Surprising Finding

Here’s the twist: the research found that interaction dynamics alone are almost as effective as analyzing the full conversation transcript. A model based only on structural signals achieved 68.20% accuracy, remarkably close to the 70.04% accuracy of an LLM that analyzes all the text. This challenges the common assumption that explicit content is the sole driver of successful AI-human interaction, and it suggests that subtle cues in how we communicate, like timing and turn-taking, are more influential than previously thought. It also means AI could become more attuned to human communication nuances without needing to ‘read between the lines’ of your private data.

What Happens Next

Looking ahead, we can expect to see these concepts integrated into real-world AI systems over the next 12-18 months. Imagine your virtual assistant not just answering questions but also adapting its communication style based on your interaction patterns: if you tend to be direct, the AI might respond more concisely. This could lead to more personalized and effective AI interactions across industries from education to healthcare. The researchers also position the framework as a diagnostic tool, helping developers understand which interaction patterns lead to successful collaboration. Developers should consider incorporating ‘conversational geometry’ into their AI training pipelines to build more adaptive and empathetic systems. Notably, a hybrid model combining interaction dynamics and textual analysis achieved the highest performance at 80.17%, demonstrating that the two signals are complementary.
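The article doesn't say how the hybrid model fuses the two signals; one simple possibility, shown purely as a sketch, is late fusion: a linear score over (hypothetical) dynamics features and text features. The function and weights below are illustrative, not the paper's architecture.

```python
import numpy as np

def hybrid_score(dynamics_feats, text_feats, w_dyn, w_text, bias=0.0):
    """Toy late-fusion reward: a linear score combining structural (dynamics)
    features with textual features. In practice the weights would be learned
    from preference data; here they are passed in for illustration."""
    dynamics_feats = np.asarray(dynamics_feats, dtype=float)
    text_feats = np.asarray(text_feats, dtype=float)
    return float(dynamics_feats @ np.asarray(w_dyn)
                 + text_feats @ np.asarray(w_text)
                 + bias)
```

The 80.17% hybrid result suggests that however the fusion is done, each signal captures information the other misses.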
