New AI Models Master Arbitrary-Position Text Insertion

Insertion Language Models (ILMs) offer flexible text generation beyond traditional methods.

Researchers have introduced Insertion Language Models (ILMs), a new AI approach for sequence generation. These models can insert tokens at any position, addressing limitations of existing methods. ILMs show improved performance in planning tasks and offer greater flexibility for text infilling.

By Mark Ellison

September 17, 2025

4 min read

Key Facts

  • Insertion Language Models (ILMs) learn to insert tokens at arbitrary positions in a sequence.
  • ILMs select both the position and the vocabulary element to be inserted.
  • ILMs outperform Autoregressive Models (ARMs) and Masked Diffusion Models (MDMs) on common planning tasks.
  • ILMs outperform MDMs and perform on par with ARMs in unconditional text generation.
  • ILMs offer greater flexibility than MDMs in arbitrary-length text infilling.

Why You Care

Ever struggled with AI-generated text that just doesn’t quite fit your needs? What if an AI could weave new words into any part of your existing content? New research introduces Insertion Language Models (ILMs), promising a more flexible way for AI to create and modify text. This development could significantly improve how you interact with AI writing tools. Imagine your AI assistant seamlessly refining your drafts, not just adding to the end.

What Actually Happened

Researchers have unveiled a novel approach to sequence generation called Insertion Language Models, according to the announcement. This new method tackles some inherent challenges faced by traditional AI models. Autoregressive models (ARMs), for example, build text one word at a time, strictly from left to right. This ‘left-to-right’ constraint can limit their ability to handle complex requirements or generate text where dependencies aren’t sequential. Masked Diffusion Models (MDMs) offered some improvements, but they can sometimes produce incoherent text when filling multiple gaps. What’s more, MDMs struggle with infilling when the exact number of tokens to be inserted isn’t known beforehand, as detailed in the blog post. ILMs address these issues by learning to insert tokens (individual words or sub-word units) at any point in a sequence. They select both the position and the word to be inserted, one token at a time. This allows them to model strong dependencies between tokens more accurately.
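The loop described above (pick a position and a token, insert, repeat until done) can be sketched in a few lines. This is a toy illustration, not the researchers’ implementation: the `score_insertions` scorer, the tiny vocabulary, and the `<stop>` token are hypothetical stand-ins for a trained model.

```python
# Toy sketch of an ILM-style generation loop (hypothetical interface).
# At each step the model scores every (position, token) pair, and the
# sequence grows by one token until a stop signal is chosen.

import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<stop>"]

def score_insertions(seq):
    """Stand-in for a trained model: score inserting each vocabulary
    token at each gap (0..len(seq)) in the current sequence."""
    return {(pos, tok): random.random()
            for pos in range(len(seq) + 1) for tok in VOCAB}

def generate(seq, max_steps=10):
    for _ in range(max_steps):
        scores = score_insertions(seq)
        pos, tok = max(scores, key=scores.get)
        if tok == "<stop>":        # the model decides the sequence is complete
            break
        seq = seq[:pos] + [tok] + seq[pos:]   # insert at an arbitrary position
    return seq

print(generate(["cat", "mat"]))
```

Note that unlike a left-to-right model, nothing in the loop privileges the end of the sequence: every gap, including those between existing tokens, competes on equal footing at every step.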

Why This Matters to You

This development means more intelligent and adaptable AI writing tools for you. ILMs can generate sequences in arbitrary order, which is crucial when text dependencies don’t follow a simple left-to-right pattern. Think of it as an AI that understands the whole picture, not just the next step. For example, if you’re writing a complex story, an ILM could insert a crucial plot point into the middle of an existing paragraph while maintaining coherence better than current methods. The research shows ILMs outperform both ARMs and MDMs in certain scenarios. “By inserting tokens one at a time, ILMs can represent strong dependencies between tokens, and their ability to generate sequences in arbitrary order allows them to accurately model sequences where token dependencies do not follow a left-to-right sequential structure,” the paper states. This flexibility is a significant step forward. How might it change your daily creative or professional workflows?

Here’s a quick look at the benefits:

  • Enhanced Flexibility: ILMs can insert text anywhere, not just at the end.
  • Improved Coherence: They maintain stronger token dependencies during generation.
  • Better Planning Tasks: Outperforms existing models in complex planning scenarios.
  • Arbitrary-Length Infilling: Handles text infilling without knowing the exact length needed.
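The last benefit is easiest to see in code. A fixed-mask approach must decide up front how many tokens fill a gap, while an insertion approach can keep growing the gap until the model chooses to stop. The `infill` helper and its `propose` callback below are hypothetical, for illustration only:

```python
# Hypothetical infilling sketch: the gap length is not specified up front.
# A fixed-mask model would need e.g. ["The", "<mask>", "<mask>", "fox"];
# an insertion model grows the gap one token at a time until it stops.

def infill(prefix, suffix, propose, max_new=8):
    """Insert tokens between prefix and suffix until `propose`
    returns None (a stand-in for the model's stopping decision)."""
    middle = []
    for _ in range(max_new):
        tok = propose(prefix + middle, suffix)
        if tok is None:
            break
        middle.append(tok)
    return prefix + middle + suffix

# Toy proposer: fill the gap with a fixed phrase, then stop.
phrase = iter(["quick", "brown"])
result = infill(["The"], ["fox"], lambda left, right: next(phrase, None))
print(result)  # ['The', 'quick', 'brown', 'fox']
```

The caller never states how long the infill should be; the same call could just as well produce zero or five tokens, which is the flexibility the researchers highlight over MDMs.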

The Surprising Finding

Here’s the twist: while ILMs excel at planning and infilling, their results in unconditional text generation are more mixed. The team revealed that ILMs outperform MDMs on this task, but only perform on par with ARMs, according to the announcement. This is surprising given that ILMs offer much greater flexibility in arbitrary-length text infilling; you might expect a more flexible model to outshine its predecessors across the board. It suggests that while ILMs bring significant advantages in control and precision, the established efficiency of ARMs for straightforward left-to-right generation still holds its own. The finding challenges the assumption that newer, more flexible models dominate every benchmark, and highlights the nuanced trade-offs in AI model design.

What Happens Next

The introduction of Insertion Language Models paves the way for more flexible AI text editing and generation. We can expect to see these models integrated into various applications within the next 12-18 months. Imagine AI writing assistants that can intelligently restructure your sentences or paragraphs, rather than just appending new content. For example, a content creation tool might use an ILM to seamlessly add a new product feature description into an existing marketing email template, ensuring the new information flows naturally. The industry implications are vast, particularly for fields requiring precise and context-aware text manipulation, including creative writing, code generation, and scientific document preparation. If you use AI writing tools, start looking for ones that advertise ‘arbitrary insertion’ capabilities; they will offer a new level of control over your generated content. The code for ILMs is already available, which means developers can begin experimenting immediately.
