LLMs Can 'Read Their Own Minds' for Better Confidence Scores

New research shows how reasoning helps AI models understand their own uncertainty.

A recent study reveals that Large Language Models (LLMs) can improve their self-reported confidence by engaging in a 'chain-of-thought' reasoning process. This method helps models accurately assess their own uncertainty, moving beyond initial overconfidence. The findings suggest a path toward more reliable AI responses.


By Mark Ellison

November 7, 2025

4 min read


Key Facts

  • LLMs, like DeepSeek R1-32B, are often overconfident in their default answer-then-confidence setting.
  • Semantic entropy, derived from sampling multiple responses, provides a more reliable measure of uncertainty.
  • Forcing a 'chain-of-thought' reasoning process significantly improves an LLM's self-reported verbal confidence.
  • This improvement occurs even for simple fact-retrieval questions.
  • A separate 'reader model' can infer confidence levels just by analyzing the LLM's reasoning chain.

Why You Care

Ever wonder if an AI truly knows what it’s talking about? When an LLM gives you an answer, how confident is it really? New research indicates that Large Language Models (LLMs) can gain a better understanding of their own certainty. This means your interactions with AI could soon become much more reliable. Imagine an AI that not only answers your questions but also tells you how sure it is about that answer. This capability could change how you trust AI-generated information.

What Actually Happened

Researchers Jakub Podolak and Rajeev Verma investigated the source of uncertainty in the DeepSeek R1-32B model. Their study focused on how this LLM expresses its self-reported verbal confidence during question-answering tasks. In its default setting, the model often displayed overconfidence, according to the paper. This was true even when its semantic entropy, a measure of uncertainty derived from the model's predictive distribution, suggested otherwise. Semantic entropy is estimated by sampling many different responses to the same question and measuring how much their meanings disagree. The team hypothesized that semantic entropy's reliability stems from its larger test-time compute budget, which lets the model explore its possible answers more thoroughly. The research shows that giving DeepSeek the computational budget to explore its distribution significantly improved its verbal confidence effectiveness. This was achieved by forcing a long chain-of-thought process before the final answer, the paper reports.
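To make the semantic-entropy idea concrete, here is a minimal sketch, not the researchers' code, of how uncertainty can be estimated by sampling several answers to the same question and measuring how much they disagree. The exact-match grouping below is a simplifying assumption; the actual method clusters answers by meaning rather than by surface form.

    import math
    from collections import Counter

    def semantic_entropy(answers: list[str]) -> float:
        """Estimate uncertainty from sampled answers.

        Groups answers that look equivalent (approximated here by a
        lowercased exact match) and computes the entropy of the cluster
        distribution. Low entropy means the samples mostly agree; high
        entropy means they scatter across different answers.
        """
        counts = Counter(a.strip().lower() for a in answers)
        total = sum(counts.values())
        entropy = 0.0
        for count in counts.values():
            p = count / total
            entropy -= p * math.log(p)
        return entropy

    # Hypothetical usage: ask the model the same question many times at a
    # temperature above zero, then score the spread of its answers.
    samples = ["Paris", "Paris", "paris", "Lyon", "Paris"]
    print(round(semantic_entropy(samples), 3))  # low value: one dominant cluster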

Why This Matters to You

This research has direct implications for anyone who uses or develops AI. Think about asking an AI for medical advice or financial planning. You need to know if the AI is guessing or truly confident. This study suggests a way to make AI responses more trustworthy for you. It means LLMs could soon provide answers with a clearer indication of their internal certainty. This is vital for high-stakes applications.

Key Findings on LLM Confidence:

  • Initial Overconfidence: LLMs often report high confidence even when uncertain.
  • Semantic Entropy: A reliable measure of uncertainty, but requires more computation.
  • Chain-of-Thought: Improves verbal confidence effectiveness by allowing deeper exploration.
  • Reader Model: Can reconstruct similar confidences just by seeing the reasoning chain.

For example, imagine you are using an AI assistant to research complex legal precedents. Instead of just getting an answer, the AI could provide its response alongside a confidence score. This score would be based on its internal reasoning process. This helps you understand the reliability of the information. How much more would you trust an AI that openly communicates its level of certainty? This approach helps you make more informed decisions. “Reliable uncertainty estimation requires explicit exploration of the generative space,” the paper states. It adds that “self-reported confidence is trustworthy only after such exploration.”
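As an illustration of the two elicitation styles being compared, here is a hedged sketch; the question and prompt wording are hypothetical, not the paper's exact templates.

    QUESTION = "In what year was the Eiffel Tower completed?"  # hypothetical example

    # Default answer-then-confidence setting: the model commits immediately
    # and attaches a confidence, which the study found tends to be overconfident.
    direct_prompt = (
        f"{QUESTION}\n"
        "Give your answer, then state your confidence as a number from 0 to 100."
    )

    # Forced chain-of-thought: the model must explore alternative answers
    # before committing, which improved its verbal score effectiveness.
    cot_prompt = (
        f"{QUESTION}\n"
        "Think step by step. List the plausible answers, weigh the evidence "
        "for each, and only then give your final answer and a confidence "
        "from 0 to 100."
    )

    print(cot_prompt)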

The Surprising Finding

Here’s the twist: the research uncovered that even simple fact-retrieval questions benefited from this extensive reasoning. You might assume that basic questions require no complex thought process from an LLM. However, the study found that forcing a long chain-of-thought still greatly improved the DeepSeek model’s verbal score effectiveness. This occurred even for questions that normally require no reasoning, according to the team. What’s more, a separate reader model could reconstruct very similar confidences just by observing the chain of thought. This indicates that the verbal score might simply be a statistic of the alternatives surfaced during reasoning. This challenges the common assumption that LLMs inherently ‘know’ their confidence without explicit introspection. It suggests that confidence is an emergent property of deep exploration.
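A rough sketch of the reader-model idea, again with hypothetical prompt wording: a second model sees only the reasoning chain and is asked to infer a confidence for the answer it leads to.

    def build_reader_prompt(reasoning_chain: str) -> str:
        """Build a prompt for a separate 'reader' model that sees only the
        chain of thought and must estimate a confidence for its conclusion."""
        return (
            "Below is another model's reasoning about a question.\n"
            "Judging only from this reasoning (how many alternatives it "
            "considered and how decisively it settled on one), output a "
            "confidence from 0 to 100 for its final answer.\n\n"
            f"Reasoning:\n{reasoning_chain}"
        )

    # Hypothetical usage: feed the prompt to any instruction-tuned model.
    chain = (
        "The tower was built for the 1889 Exposition Universelle, and "
        "construction finished in March 1889, so the answer is 1889."
    )
    print(build_reader_prompt(chain))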

What Happens Next

These findings point towards a future where LLMs are more transparent about their knowledge limits. We can expect to see more models incorporating explicit reasoning steps to improve their self-reported confidence signals. This could become standard practice within the next 6 to 12 months. Developers might integrate these chain-of-thought mechanisms into their AI architectures. This would provide users with more nuanced and trustworthy outputs. For example, future AI tools could offer a ‘confidence meter’ alongside every answer. This would allow you to gauge the reliability of the information instantly. The industry implications are significant, leading to more transparent and accountable AI systems. Actionable advice for readers is to look for AI tools that offer transparency about their confidence levels. This research was presented at the UncertaiNLP Workshop at EMNLP 2025, highlighting its relevance for the future of AI. This suggests a continued focus on improving AI’s ability to ‘read its own mind.’
