LLMs Persuade Without Human-Like 'Mind Reading'

New research shows AI can influence beliefs effectively, even without understanding human thoughts.

A recent study challenges assumptions about AI persuasion, revealing that Large Language Models (LLMs) can effectively influence human beliefs and behaviors without needing a 'Theory of Mind.' This suggests LLMs use rhetorical strategies rather than human-like understanding to persuade.

By Mark Ellison

February 28, 2026

4 min read

Key Facts

  • LLMs can persuade effectively without a human-like 'Theory of Mind'.
  • LLMs excelled in persuasion when target information was 'Revealed' but struggled when 'Hidden'.
  • Humans performed moderately well in both 'Revealed' and 'Hidden' conditions.
  • LLMs outperformed human persuaders in experiments with human targets.
  • The study suggests LLMs use rhetorical strategies for persuasion, not explicit ToM reasoning.

Why You Care

Ever wonder if an AI truly understands what you’re thinking when it tries to convince you of something? A new study suggests that Large Language Models (LLMs) are incredibly persuasive, but not for the reasons you might expect. This research, published on arXiv, indicates that these AIs can sway opinions without needing to ‘read your mind’ like a human would. How might this change your interactions with AI in the future?

This finding is crucial because it redefines our understanding of AI capabilities in persuasion. It also highlights the potential for LLMs to influence people's beliefs and behavior in ways we are just beginning to comprehend, according to the researchers.

What Actually Happened

Researchers developed a novel task to evaluate the 'Theory of Mind' (ToM) abilities of both humans and LLMs. Theory of Mind is the capacity to attribute mental states—beliefs, intents, desires, emotions, knowledge—to oneself and to others. The task required an agent to persuade a target to choose one of three policy proposals by strategically revealing information, as detailed in the paper.

Success depended on understanding the target’s knowledge states (what they knew) and motivational states (what they valued). The study varied whether these states were ‘Revealed’ or ‘Hidden.’ In the ‘Hidden’ condition, persuaders had to inquire about or infer these states. The team revealed that LLMs excelled when information was ‘Revealed’ but struggled when it was ‘Hidden,’ performing below chance.
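To make the Revealed/Hidden distinction concrete, here is a minimal toy sketch of the task structure. All names and the success criterion are illustrative assumptions, not the authors' actual implementation; the Hidden persuader here simply guesses at chance, which mirrors the below-chance LLM result only loosely.

```python
import random

random.seed(0)
PROPOSALS = ["A", "B", "C"]

def run_trial(condition):
    """One toy persuasion trial. The target privately values one of
    three proposals; the persuader succeeds if it tailors its pitch to
    that valued proposal. In 'Revealed', the target's state is given;
    in 'Hidden', this toy persuader can only guess (chance baseline)."""
    target_values = random.choice(PROPOSALS)
    if condition == "Revealed":
        guess = target_values             # motivational state is visible
    else:
        guess = random.choice(PROPOSALS)  # must infer; guessing here
    return guess == target_values

def success_rate(condition, trials=10_000):
    """Fraction of trials in which persuasion succeeds."""
    return sum(run_trial(condition) for _ in range(trials)) / trials
```

Running this yields a perfect success rate in the Revealed condition and roughly one-in-three in Hidden, illustrating why access to (or correct inference of) the target's states is the crux of the task.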

Why This Matters to You

This research has practical implications for how you interact with AI. It suggests that LLMs are highly effective at persuasion through rhetorical strategies, rather than by truly understanding your internal thoughts. Imagine you’re debating a policy with an AI chatbot. You might assume it’s tailoring its arguments based on a deep understanding of your motivations. However, this study indicates it’s more likely using language patterns to influence you.

Consider the implications for marketing, education, or even political discourse. If an AI can persuade you without 'knowing' your mind, how does that change your trust in its advice? The study finds that LLMs outperformed human persuaders across all conditions in experiments involving human targets. This suggests a potent, yet different, form of influence.

“These results suggest that effective persuasion can occur without explicit ToM reasoning (e.g., through rhetorical strategies) and that LLMs excel at this form of persuasion,” the paper states. This means your future interactions with AI could be more influenced than you realize.

Here’s a breakdown of the persuasion task conditions:

| Condition | Information State | LLM Performance | Human Performance |
| --- | --- | --- | --- |
| Experiment 1 | Revealed | Excelled | Moderately well |
| Experiment 1 | Hidden | Below chance | Moderately well |
| Experiment 2 | Human target | Outperformed humans | Underperformed LLMs |
| Experiment 3 | Real beliefs | Outperformed humans | Underperformed LLMs |

The Surprising Finding

Here’s the twist: despite struggling with ‘mind-reading’ tasks, LLMs were more persuasive than humans when interacting with actual human targets. This challenges the common assumption that understanding another’s mental state is essential for effective persuasion. The research shows that LLMs can influence people’s beliefs and behavior significantly.

LLMs outperformed human persuaders across all conditions in experiments involving human targets. This is surprising because humans are generally considered masters of social nuance and understanding. The study finds that LLMs achieve this not through human-like empathy or understanding, but through superior rhetorical strategies. This suggests that the ability to generate compelling arguments and present information effectively can be more effective than inferring a target's internal mental states. It makes you wonder what other capabilities we might be misattributing to AI.

What Happens Next

This research opens new avenues for understanding AI’s role in society. We can expect to see more AI persuasion tools emerging in the next 12-18 months. For example, imagine customer service bots that are not just helpful but also incredibly effective at swaying your purchasing decisions, even without ‘knowing’ your personal preferences in a human sense.

This also has significant industry implications, particularly in areas like advertising, content creation, and even political campaigns. Companies might start deploying LLMs specifically trained for rhetorical effectiveness. As a reader, you should be aware of this new form of influence. Consider critically evaluating information presented by AI, understanding that its persuasive power may not stem from deep 'understanding' but from linguistic skill. The researchers caution against attributing human-like Theory of Mind to LLMs, while highlighting the models' potential to influence beliefs and behavior.
