Why Your AI Gets Stuck: New Research Uncovers Repetition Roots

A study reveals two distinct mechanisms behind Large Language Models' frustrating repetitive loops.

New research by Matéo Mahaut and Francesca Franzon explores why Large Language Models (LLMs) sometimes get stuck in repetitive loops. They found that repetition caused by direct copying (in-context learning) differs fundamentally from natural, spontaneous repetition, offering crucial insights into LLM behavior.


By Sarah Kline

November 6, 2025

3 min read


Key Facts

  • Large Language Models (LLMs) can generate repetitive text, which is rare in human language.
  • The study identified two distinct mechanisms for repetition: in-context learning (ICL) induced and naturally occurring.
  • ICL-induced repetition relies on specialized 'attention heads' that develop during training.
  • Naturally occurring repetition emerges early in training and lacks defined neural circuitry.
  • Natural repetition often involves low-information tokens, suggesting a fallback when context is lost.

Why You Care

Ever found your favorite AI chatbot repeating itself endlessly? It’s a common and frustrating issue. Imagine asking an AI for a story, and it keeps writing the same sentence over and over. Why does this happen, and what does it mean for the future of AI interactions? This new research dives deep into the puzzling world of Large Language Model (LLM) repetitions.

What Actually Happened

Researchers Matéo Mahaut and Francesca Franzon investigated the mechanisms behind repetitive outputs in Large Language Models (LLMs), AI systems that generate human-like text. Sometimes these models fall into repetitive loops, producing identical word sequences over and over. Because such behavior is rare in natural human language, its frequency in LLMs is a significant puzzle. The study contrasted two conditions for repetition: natural text prompts, and in-context learning (ICL) setups that explicitly require the model to copy information, for example by repeating a given pattern. The team found that these seemingly similar repetitions arise from distinct underlying processes.
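
To pin down what a “repetitive loop” actually looks like in output text, here is a hedged illustration (our own sketch, not code from the paper) that flags the degenerate case where a model’s output ends in the same token sequence repeated back-to-back:

```python
def find_repeating_suffix(tokens: list[str], max_period: int = 10):
    """Return (period, repeats) if the text ends in a back-to-back loop
    of a phrase up to `max_period` tokens long; otherwise None."""
    n = len(tokens)
    for period in range(1, max_period + 1):
        repeats = 1
        # Count how many consecutive copies of the final `period`-token
        # window appear at the end of the sequence.
        while (n - (repeats + 1) * period >= 0 and
               tokens[n - (repeats + 1) * period : n - repeats * period]
               == tokens[n - period : n]):
            repeats += 1
        if repeats >= 3:  # arbitrary threshold: three consecutive copies
            return period, repeats
    return None

output = ("the cat sat on the mat . " * 3).split()
print(find_repeating_suffix(output))  # -> (7, 3): a 7-token phrase looped 3 times
```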

Why This Matters to You

Understanding these different types of repetition is crucial for improving AI. It helps developers create more reliable and less frustrating AI experiences for you. If an AI understands why it’s repeating, it can learn to avoid it. Think of it as diagnosing a cough; is it allergies or something more serious? The treatment depends on the cause. This research helps pinpoint the ‘causes’ of AI repetition.

Key Differences in Repetition Mechanisms:

Repetition Type        Underlying Mechanism
In-context learning    Relies on a dedicated network of attention heads
Natural occurrence     No defined circuitry; emerges early in training

“Our analyses reveal that ICL-induced repetition relies on a dedicated network of attention heads that progressively specialize over training,” the paper states. This means specific parts of the AI’s ‘brain’ are trained to copy. Meanwhile, natural repetition appears to be a fallback behavior when the AI struggles to find relevant context. How might knowing this change your daily interactions with AI tools?
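
A “dedicated network of attention heads” echoes what interpretability researchers call copy or induction heads. As a simplified, self-contained sketch (our assumption of how such a head could be scored; the paper’s actual analysis may differ), the code below measures how much attention a head places on the token that followed an earlier occurrence of the current token, the signature of a copying head:

```python
import numpy as np

def copying_score(attn: np.ndarray, token_ids: list[int]) -> float:
    """Average attention mass a head places on the successor of an
    earlier occurrence of the current token (prefix-matching score)."""
    scores = []
    for i, tok in enumerate(token_ids):
        # For each earlier position j holding the same token, a copy head
        # at position i should attend to j + 1 (the token that came next).
        targets = [j + 1 for j in range(i) if token_ids[j] == tok and j + 1 < i]
        if targets:
            scores.append(attn[i, targets].sum())
    return float(np.mean(scores)) if scores else 0.0

# Idealized head on a repeating pattern: all attention goes to the token
# right after the most recent previous occurrence of the current token.
ids = [5, 8, 2, 5, 8, 2, 5, 8, 2]
attn = np.zeros((len(ids), len(ids)))
for i, tok in enumerate(ids):
    matches = [j for j in range(i) if ids[j] == tok and j + 1 < i]
    if matches:
        attn[i, matches[-1] + 1] = 1.0
print(copying_score(attn, ids))  # -> 1.0 for this perfect copier
```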

The Surprising Finding

Here’s the twist: not all repetitions are created equal. While they might look the same on the surface, the study finds they originate from qualitatively different internal processes. ICL-induced repetition, which happens when an AI is specifically asked to copy, develops through specialized ‘attention heads’ during training. Naturally occurring repetition, like an AI getting stuck on a phrase, instead emerges early in training with no defined circuitry behind it. This is surprising because you might assume all repetitions come from the same underlying flaw. Instead, the paper indicates that natural repetition often focuses on low-information tokens – essentially, filler words – suggesting it’s a fallback when the AI can’t retrieve meaningful context. This challenges the common assumption that all AI ‘failures’ are uniform.
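
To make “low-information tokens” concrete: information content can be measured as surprisal, the negative log-probability of a token. The toy sketch below uses unigram frequencies as a stand-in for a real model’s next-token probabilities (an assumption for illustration only): frequent filler words like “the” carry the least information, so a model looping on them is adding almost nothing.

```python
import math
from collections import Counter

# Toy corpus standing in for model context; real analyses would use the
# LLM's own next-token probabilities rather than unigram counts.
corpus = ("the model got stuck and the output said the the the again and "
          "again because the context gave the model nothing new").split()
counts = Counter(corpus)
total = sum(counts.values())

def surprisal_bits(token: str) -> float:
    """Information content of a token under the unigram model, in bits."""
    return -math.log2(counts[token] / total)

for tok in ["the", "model", "context"]:
    print(f"{tok:>8}: {surprisal_bits(tok):.2f} bits")
# 'the' scores lowest: looping on such filler tokens adds almost no
# information, consistent with repetition as a low-cost fallback.
```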

What Happens Next

These findings pave the way for more targeted solutions to AI repetition. Developers can now focus on different parts of the model to address each type of repetitive behavior. For example, to combat ICL-induced repetition, they might refine the training of those dedicated attention heads. To reduce natural repetition, efforts could focus on improving context retrieval mechanisms. Expect to see advancements in LLM stability and coherence in the coming months, perhaps by late 2025 or early 2026. The industry implications are significant, pointing toward more reliable and less frustrating AI applications. As the team put it, these insights reflect “distinct modes of failure and adaptation in language models,” guiding future development toward more coherent, human-like AI interactions. Your future AI experiences could become much smoother and more reliable.
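
One existing stopgap, separate from this paper’s contributions, is a decoding-time repetition penalty in the spirit of the CTRL penalty (Keskar et al., 2019): already-generated tokens have their logits discounted so the sampler is less inclined to pick them again. A minimal sketch:

```python
import numpy as np

def apply_repetition_penalty(logits: np.ndarray, generated: list[int],
                             penalty: float = 1.2) -> np.ndarray:
    """Discount the logits of tokens that were already generated."""
    out = logits.copy()
    for tok in set(generated):
        # Convention from common implementations: shrink positive logits,
        # push negative logits further down.
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = np.array([2.0, 0.5, -1.0, 3.0])
print(apply_repetition_penalty(logits, generated=[0, 3]))
# -> [1.6667  0.5  -1.   2.5]: tokens 0 and 3 become less likely to repeat
```

Note that a blanket penalty like this treats both kinds of repetition identically; the study’s point is that the two failure modes may ultimately deserve different fixes.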
