New Metric Measures LLM Efficiency: 'Information Capacity'

Researchers introduce a unified way to evaluate large language models based on text compression and computational cost.

A new metric called 'information capacity' has been proposed to evaluate the efficiency of large language models (LLMs). This metric links an LLM's text compression performance to its computational cost, offering a consistent way to compare models and predict performance across different sizes and architectures. It also uniquely considers tokenizer efficiency.

By Mark Ellison

November 15, 2025

4 min read

Key Facts

  • A new metric called 'information capacity' evaluates LLM efficiency.
  • Information capacity links text compression performance to computational complexity.
  • The metric considers tokenizer efficiency, often overlooked in other evaluations.
  • Empirical evaluations on 49 models across 5 datasets showed consistent results.
  • Models of varying sizes within a series exhibited consistent information capacity.

Why You Care

Ever wonder if your favorite large language model (LLM) is truly efficient, or just a resource hog? With LLMs becoming central to so many applications, understanding their true efficiency is crucial. This new research introduces a metric that could change how we evaluate these AI tools. It helps you understand which models deliver the most bang for their computational buck.

What Actually Happened

Recent advances in large language models (LLMs) have sharply increased demand for computational resources, and the widespread use of test-time scaling pushes that demand even higher. This makes inference efficiency – how well an LLM performs given its computational cost – increasingly important. However, a single, unified metric that accurately reflects an LLM’s efficiency across different sizes and architectures has been missing. Researchers Cheng Yuan, Jiawei Shao, Chi Zhang, and Xuelong Li have now introduced “information capacity,” a measure that evaluates model efficiency as text compression performance relative to computational complexity. The paper explains that larger models generally predict the next token more accurately, yielding greater compression gains but at higher computational cost. Empirical evaluations on mainstream open-source models show that information capacity stays consistent within a model series, even as model size varies.

Why This Matters to You

This new metric offers a fair way to compare efficiency across different LLM series, and it allows accurate performance prediction within a single series. Imagine you are choosing an LLM for your next project: this metric could help you pick the most efficient one for your budget and performance needs. Think of it as a ‘miles per gallon’ rating for AI models. What’s more, a distinctive feature of information capacity is its incorporation of tokenizer efficiency. This factor affects both input and output token counts, yet it is often neglected in standard LLM evaluations, so the metric provides a more complete picture of an LLM’s real-world performance. “This metric enables a fair efficiency comparison across model series and accurate performance prediction within a model series,” the paper states. How might a standardized efficiency score influence your choice of AI tools in the future?

Here’s what the information capacity metric considers:

  • Text Compression Performance: How well the LLM can predict and compress text.
  • Computational Complexity: The resources (like processing power) needed for the LLM to operate.
  • Tokenizer Efficiency: How the model’s tokenizer handles input and output tokens.
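The three ingredients above can be combined in a toy calculation. The paper's exact formula is not given in this article, so the sketch below makes two illustrative assumptions: compression gain is derived from the model's average next-token cross-entropy (a model with cross-entropy H nats per token can arithmetic-code text at roughly H/ln 2 bits per token), and compute cost is approximated as FLOPs per token.

```python
import math

def compressed_bits(avg_ce_nats: float, n_tokens: int) -> float:
    # A model with average next-token cross-entropy H (in nats) can,
    # via arithmetic coding, encode the text in ~H / ln(2) bits per token.
    return n_tokens * avg_ce_nats / math.log(2)

def information_capacity(corpus_bytes: int, avg_ce_nats: float,
                         n_tokens: int, flops_per_token: float) -> float:
    # Toy ratio: bits of compression gained per FLOP spent.
    # Illustrative stand-in, NOT the paper's actual definition.
    raw_bits = corpus_bytes * 8
    gain = raw_bits - compressed_bits(avg_ce_nats, n_tokens)
    return gain / (n_tokens * flops_per_token)

# Same compute, better prediction -> higher capacity.
strong_model = information_capacity(1000, avg_ce_nats=1.5,
                                    n_tokens=250, flops_per_token=1e9)
weak_model = information_capacity(1000, avg_ce_nats=2.5,
                                  n_tokens=250, flops_per_token=1e9)
# Same prediction quality, but a tokenizer that splits the same
# text into more tokens -> more compute spent, lower capacity.
verbose_tokenizer = information_capacity(1000, avg_ce_nats=1.5,
                                         n_tokens=400, flops_per_token=1e9)
```

Note how the token count appears in both the numerator (encoded size) and the denominator (total compute), which is why a wasteful tokenizer drags the score down even when per-token prediction quality is unchanged.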

The Surprising Finding

One surprising aspect of this research is how consistently the information capacity metric performed. The study finds that models of varying sizes within a series exhibit consistent information capacity. This challenges the assumption that larger models are always disproportionately less efficient. The team assessed the information capacity of 49 models on 5 heterogeneous datasets. They observed consistent results regarding the influences of tokenizer efficiency, pretraining data, and the mixture-of-experts architecture. This suggests that the underlying design of a model series might dictate its efficiency profile more than its sheer size. It implies that simply scaling up a model might not drastically alter its fundamental efficiency profile, as long as it remains within the same architectural family.

What Happens Next

This research, submitted on November 11, 2025, provides a practical tool for the ongoing development of LLMs. Developers and researchers can use information capacity to make more informed decisions when designing and deploying models. For example, a company building a new AI assistant could use the metric to select an LLM that balances performance with operational costs, leading to more sustainable and cost-effective AI solutions. The industry implications are significant, potentially guiding future LLM benchmarks and optimization efforts. Your own projects might soon benefit from models selected not just for their capabilities but also for their efficiency. Future work will likely explore how to further improve models against this metric, which could usher in a new era of ‘lean AI’ development.
