LLM 'Drift' Threatens Finance, Small Models Shine

New research reveals surprising inconsistencies in large language models for financial tasks, challenging common assumptions.

A recent study highlights 'output drift' in LLMs used by financial institutions, where models produce inconsistent results. Surprisingly, smaller models demonstrated 100% consistency, while larger ones struggled. This finding has major implications for auditability and trust in AI.

By Sarah Kline

November 17, 2025

4 min read

Key Facts

  • LLMs exhibit 'output drift' (nondeterministic outputs) in financial workflows.
  • Smaller models (Granite-3-8B, Qwen2.5-7B) achieved 100% output consistency.
  • Larger model (GPT-OSS-120B) showed only 12.5% consistency.
  • The study challenges the assumption that larger models are universally superior.
  • A three-tier model classification system and audit-ready attestation system were developed.

Why You Care

Ever relied on an AI for a crucial report, only to wonder if it would give you the same answer twice? If you’re using large language models (LLMs) for essential tasks, especially in finance, this isn’t just a hypothetical concern. New research reveals a significant problem called ‘output drift.’ This issue could undermine the very trust we place in AI. What if your AI assistant gave different answers to the same question, every time you asked?

What Actually Happened

A recent paper, “LLM Output Drift: Cross-Provider Validation & Mitigation for Financial Workflows,” by Raffi Khatchadourian and Rolando Franco, explores an essential challenge: Large Language Models (LLMs) used in financial institutions can produce inconsistent outputs, according to the announcement. This nondeterministic behavior, known as output drift, poses serious risks for auditability and trust in AI systems. The researchers quantified drift across five model architectures, ranging from 7 billion to 120 billion parameters, focusing on regulated financial tasks. The findings challenge a common belief that bigger models are always better.

Why This Matters to You

If you’re deploying LLMs in sensitive environments, understanding output drift is crucial. This research provides a framework for evaluating model reliability and helps ensure that your AI systems meet strict compliance requirements. The study introduced a three-tier model classification system, as detailed in the blog post, which enables risk-appropriate deployment decisions and helps you choose the right model for the job.

Imagine you’re a financial analyst using an LLM to reconcile complex accounts. If the model provides a different reconciliation report each time you run it, how can you trust its accuracy? How would you audit its decisions? This inconsistency makes regulatory reporting a nightmare. It erodes confidence in the AI’s ability to perform its function reliably. This study helps you understand which models are truly dependable.

What steps are you taking to ensure the consistency of your AI outputs today? The research highlights that “smaller models (Granite-3-8B, Qwen2.5-7B) achieve 100% output consistency at T=0.0,” according to the paper. This means you might not need the largest, most expensive models for essential, consistent tasks. Choosing the right model size can save resources and increase reliability. What’s more, the study offers an audit-ready attestation system. This system uses dual-provider validation. This helps ensure compliance.
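The consistency figures quoted above can be reproduced with a simple harness: run the same prompt repeatedly (or across two providers) and count byte-identical outputs. A minimal sketch, assuming responses are collected as plain strings; the hashing approach and function names here are illustrative, not taken from the paper:

```python
import hashlib
from collections import Counter

def consistency_rate(outputs):
    """Fraction of runs that match the most common output.

    1.0 means every repeated run produced byte-identical text,
    the bar the smaller models cleared at T=0.0.
    """
    if not outputs:
        return 0.0
    # Hash each output so long responses compare cheaply.
    digests = [hashlib.sha256(o.encode("utf-8")).hexdigest() for o in outputs]
    most_common_count = Counter(digests).most_common(1)[0][1]
    return most_common_count / len(digests)

# Example: 7 of 8 runs identical -> 0.875 consistency
runs = ["Total: $1,204.50"] * 7 + ["Total: $1,204.49"]
rate = consistency_rate(runs)
```

The same digests could feed a dual-provider attestation record: if two independent providers yield matching hashes for the same prompt, that match is itself auditable evidence of consistency.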

The Surprising Finding

Here’s the twist: the research uncovered a stark inverse relationship between model size and consistency. You might assume larger, more complex models would be more reliable; the study found the opposite. Smaller models, like Granite-3-8B and Qwen2.5-7B, achieved 100% output consistency at a temperature setting of T=0.0, which means greedy decoding. In contrast, the much larger GPT-OSS-120B model showed only 12.5% consistency, regardless of its configuration. This finding directly challenges the conventional assumption that larger models are universally superior for production deployment. For tasks requiring high determinism, smaller models might be the safer bet, and that could change how financial institutions approach AI adoption.
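Why does T=0.0 matter? At temperature zero a model decodes greedily, always taking the highest-probability token, whereas any positive temperature reintroduces sampling randomness. A toy sketch of the two decoding modes (illustrative only; real deployments also face floating-point and batching nondeterminism, which is one reason drift can persist even at T=0.0, as the 120B result suggests):

```python
import math
import random

def greedy_next_token(logits):
    """T=0.0 (greedy decoding): always pick the highest-logit token,
    so repeated runs should, in principle, yield identical sequences."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_next_token(logits, temperature, rng):
    """T>0: softmax sampling over temperature-scaled logits
    reintroduces run-to-run randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Greedy decoding is deterministic: index 1 wins every time.
choice = greedy_next_token([0.1, 2.3, 1.0])
```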

What Happens Next

This research, presented at AI4F @ ACM ICAIF ‘25 in November 2025, sets a new standard. Financial institutions will likely re-evaluate their LLM deployment strategies, focusing more on consistency and auditability. For example, a bank might prioritize smaller, more consistent models for regulatory compliance and reserve larger models for less essential, creative tasks. This shift could lead to more reliable and trustworthy AI applications. The industry implications are significant: expect new tools and methodologies for validating LLM outputs against stringent financial requirements. The documentation indicates a focus on task-specific invariant checking for RAG, JSON, and SQL outputs, calibrated with finance-specific materiality thresholds. Your organization should begin exploring these validation techniques to secure your AI workflows and ensure your systems are both effective and compliant.
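Task-specific invariant checking of the kind described above can be prototyped for JSON outputs: parse the response, verify required fields, and apply a materiality threshold to numeric totals. A hedged sketch, where field names like `line_items` and the `tolerance` value are assumptions for illustration, not the paper's schema:

```python
import json

def check_json_invariants(output, required_keys, tolerance=0.01):
    """Invariant check for an LLM's JSON output (illustrative sketch):
    the output must parse, must carry the expected fields, and any
    reported total must agree with its line items within a
    finance-style materiality threshold."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False  # malformed JSON fails the structural invariant
    if not all(k in data for k in required_keys):
        return False  # missing a required field
    # Materiality check: line items must sum to the reported total.
    if "line_items" in data and "total" in data:
        if abs(sum(data["line_items"]) - data["total"]) > tolerance:
            return False
    return True

# A reconciliation output that parses and balances passes the check.
ok = check_json_invariants(
    '{"total": 3.0, "line_items": [1.0, 2.0]}',
    ["total", "line_items"],
)
```

Analogous checks for SQL (does the query parse and touch only allowed tables?) or RAG (do cited passages exist in the retrieved set?) would follow the same pattern: deterministic, auditable pass/fail gates around a nondeterministic model.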
