LLMs Get Smarter by Being Less Specific: A New AI Approach

New research introduces 'Selective Abstraction' to boost AI reliability in long-form content.

Large Language Models (LLMs) often make factual errors, limiting their use in critical applications. New research from Shani Goren and colleagues proposes Selective Abstraction (SA), a method for LLMs to reduce specificity when uncertain. This approach significantly improves reliability while retaining most of the original information.


By Katie Rowan

February 15, 2026

4 min read


Why You Care

Ever been frustrated by an AI chatbot confidently stating something incorrect? What if AI could tell you when it’s unsure, but still give you useful information? Large Language Models (LLMs) are everywhere, yet their tendency to make factual errors can erode your trust. This new research directly addresses that problem. It aims to make AI more reliable for everyone, from content creators to everyday users.

What Actually Happened

Researchers Shani Goren, Ido Galil, and Ran El-Yaniv have introduced a new framework called Selective Abstraction (SA). As described in the release, the framework allows LLMs to trade specificity for reliability. Instead of simply refusing to answer when uncertain, the model reduces the detail of uncertain content. The team argues that traditional “all-or-nothing” abstention is too restrictive for long-form generation: it often discards valuable information unnecessarily. Selective Abstraction offers a more nuanced approach to this challenge.

The core idea is “Atom-wise Selective Abstraction,” which decomposes a response into atomic claims: short, self-contained statements. If the LLM is uncertain about a specific claim, it replaces that claim with a higher-confidence, less specific abstraction. This preserves factual accuracy without discarding the surrounding context. The technical report explains the method in detail.
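
To make the mechanism concrete, here is a minimal, runnable sketch of the atom-wise idea. The `Atom` structure, the hard-coded confidence scores, and the fallback abstractions are all illustrative stand-ins; the paper’s actual pipeline relies on an LLM-based claim decomposer and an uncertainty estimator, which are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    text: str                       # a short, self-contained claim
    confidence: float               # estimated probability the claim is correct
    abstraction: str | None = None  # a broader, higher-confidence fallback

def selective_abstraction(atoms: list[Atom], threshold: float = 0.8) -> str:
    """Keep confident claims verbatim; swap uncertain ones for their abstractions."""
    out = []
    for atom in atoms:
        if atom.confidence >= threshold:
            out.append(atom.text)         # confident: keep the specific claim
        elif atom.abstraction is not None:
            out.append(atom.abstraction)  # uncertain: trade specificity for reliability
        # A traditional abstention policy would simply drop the claim here.
    return " ".join(out)

# Toy example: one confident claim, one uncertain claim with a fallback.
atoms = [
    Atom("Marie Curie won two Nobel Prizes.", 0.95),
    Atom("She moved to Paris in 1891.", 0.55,
         "She moved to Paris in the early 1890s."),
]
print(selective_abstraction(atoms))
# Marie Curie won two Nobel Prizes. She moved to Paris in the early 1890s.
```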

Why This Matters to You

This development means more trustworthy AI interactions for your daily tasks. Imagine you’re using an AI to draft a report. Instead of getting a confidently wrong detail, the AI might provide a more general, but factually correct, statement. This allows you to verify specifics yourself. How often do you wish your AI tools were more dependable?

For example, if you ask an LLM about a niche historical event, it might generalize a specific date to a broader period. This prevents it from fabricating a precise, but incorrect, date. The researchers report that this method significantly improves reliability.

“LLMs are widely used, yet they remain prone to factual errors that erode user trust and limit adoption in high-risk settings,” according to the announcement. This highlights the pressing need for solutions like Selective Abstraction. The goal is to get useful information, even if it’s less precise, rather than precise but incorrect information.

Here’s how Selective Abstraction compares to traditional methods:

| Feature | Traditional Abstention | Selective Abstraction |
| --- | --- | --- |
| Uncertainty Handling | All-or-nothing | Gradual reduction |
| Information Loss | High | Low |
| Reliability | Improved (but restrictive) | Improved (flexible) |
| User Experience | Frustrating (gaps) | More consistent |

The Surprising Finding

Here’s the twist: the research shows that making LLMs less specific can actually make them more accurate and reliable. You might assume that more detail from an AI is always better. However, the study finds that when LLMs are uncertain, attempting to be overly specific leads to errors. By selectively reducing detail, the models maintain factual correctness. This challenges the common assumption that AI should always strive for maximum specificity.

Specifically, atom-wise SA consistently outperforms existing baselines. The team reports that it improves the area under the risk-coverage curve (AURC) by up to 27.73% over claim removal. In other words, by being less specific, LLMs preserve most of their original meaning while boosting accuracy. It’s a counterintuitive but highly effective strategy for improving Large Language Model performance.
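
For readers curious what that metric captures, here is a small, self-contained sketch of AURC: claims are ranked by confidence, and the error rate (“risk”) is averaged across all coverage levels, so lower values are better. The toy scores and labels below are invented purely for illustration.

```python
def aurc(confidences: list[float], correct: list[bool]) -> float:
    """Area under the risk-coverage curve (lower is better)."""
    # Rank claims from most to least confident.
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    errors, risks = 0, []
    for k, i in enumerate(order, start=1):
        errors += not correct[i]    # running count of wrong claims kept so far
        risks.append(errors / k)    # risk at coverage k / n
    return sum(risks) / len(risks)  # average risk across coverage levels

# Well-ranked confidences (errors last) give a much lower AURC
# than badly ranked ones, even with the same overall accuracy.
print(aurc([0.9, 0.8, 0.6, 0.3], [True, True, True, False]))  # 0.0625
print(aurc([0.3, 0.6, 0.8, 0.9], [True, True, True, False]))  # ~0.52
```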

What Happens Next

This research paves the way for more dependable AI applications in the near future. We could see initial integrations of Selective Abstraction in commercial LLMs within the next 6 to 12 months. Imagine using an AI writing assistant that flags uncertain claims. It then offers a more generalized, yet correct, alternative. This could significantly enhance your workflow.

For example, content creators might find their AI-generated drafts require less fact-checking. This is especially true for complex or rapidly evolving topics. The industry implications are vast, particularly for high-stakes environments like legal or medical text generation. The documentation indicates that this method could reduce the risks associated with AI deployment. Our advice: keep an eye on updates from major LLM providers, which will likely incorporate similar reliability features. This would make your interactions with AI much more reliable and trustworthy.
