New AI Model CaDDi Bridges Gap Between Diffusion and Language Models

Researchers introduce CaDDi, a non-Markovian discrete diffusion model that improves text generation.

A new AI model called CaDDi enhances discrete diffusion models by lifting the Markovian assumption. This allows for more expressive and accurate text generation, narrowing the performance gap with large language models. CaDDi can also directly reuse pretrained LLM weights.


By Katie Rowan

October 31, 2025

4 min read


Key Facts

  • CaDDi (Causal Discrete Diffusion Model) is a new discrete diffusion model.
  • It lifts the Markovian assumption by conditioning on the entire generative trajectory.
  • CaDDi unifies sequential (causal) and temporal (diffusion) reasoning.
  • It allows direct reuse of pretrained LLM weights without architectural changes.
  • CaDDi outperforms state-of-the-art discrete diffusion baselines on natural-language benchmarks.

Why You Care

Ever wonder why some AI-generated text still feels a bit off, even with today’s advanced models? What if AI could generate text that’s not only fluent but also consistently coherent over long stretches? A new model, CaDDi, promises significant strides in how AI creates structured sequences, particularly in natural language. This could mean more accurate and natural-sounding AI assistants, better content creation tools, and more capable chatbots for your business.

What Actually Happened

Researchers have introduced CaDDi (Causal Discrete Diffusion Model), a novel approach to discrete diffusion models, according to the announcement. This new model addresses a key limitation of previous diffusion models: their reliance on the Markovian assumption. That assumption meant each step in the generation process considered only the immediately preceding state, so early errors could never be corrected later. CaDDi changes this by conditioning on the entire generative trajectory, allowing the model to revisit and refine past states, as detailed in the blog post.
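To make the distinction concrete, here is a minimal sketch in Python contrasting a Markovian reverse step, which sees only the current noisy state, with a non-Markovian step in the spirit of CaDDi that conditions on the whole trajectory so far. This is our illustration; the function names and toy denoisers are not from the paper.

```python
import random
from typing import Callable, List

State = List[int]  # a sequence of token ids

def markovian_reverse(denoise: Callable[[State], State],
                      x_T: State, T: int) -> State:
    """Standard discrete diffusion: each step sees only the current state,
    so an early mistake is carried forward and cannot be revisited."""
    x = x_T
    for _ in range(T):
        x = denoise(x)            # p(x_{t-1} | x_t)
    return x

def non_markovian_reverse(denoise: Callable[[List[State]], State],
                          x_T: State, T: int) -> State:
    """CaDDi-style (illustrative): each step conditions on the whole
    trajectory generated so far, so past states can inform corrections."""
    trajectory = [x_T]
    for _ in range(T):
        x = denoise(trajectory)   # p(x_{t-1} | x_t, ..., x_T)
        trajectory.append(x)
    return trajectory[-1]

# Toy denoisers: flip one random token toward 0 ("clean").
def toy_markov(x: State) -> State:
    i = random.randrange(len(x))
    return x[:i] + [0] + x[i + 1:]

def toy_full(traj: List[State]) -> State:
    # Sees every earlier state; here it simply denoises the latest one.
    return toy_markov(traj[-1])

print(markovian_reverse(toy_markov, [1, 1, 1, 1], T=4))
print(non_markovian_reverse(toy_full, [1, 1, 1, 1], T=4))
```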

By unifying sequential (causal) and temporal (diffusion) reasoning, CaDDi also treats standard causal language models as a special case. This design permits the direct reuse of pretrained Large Language Model (LLM) weights without architectural changes. The team revealed that this design significantly improves performance on natural-language benchmarks.
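Since the announcement says pretrained LLM weights can be reused directly, one plausible reading is that an off-the-shelf decoder simply processes the flattened trajectory with its usual causal attention. Here is a hedged sketch using Hugging Face Transformers; the trajectory encoding below is our assumption, not CaDDi's exact recipe.

```python
# Sketch: reusing pretrained causal-LM weights as a diffusion denoiser.
# Assumption (ours): noisy states from later timesteps are concatenated
# into one causal sequence that the decoder reads left to right; CaDDi's
# actual trajectory encoding may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # weights unchanged

def denoise_step(trajectory_texts: list) -> torch.Tensor:
    """Predict token logits for the next (less noisy) state, conditioned
    on the entire trajectory so far, flattened into one causal sequence."""
    flat = tok.eos_token.join(trajectory_texts)   # x_T, ..., x_t as one stream
    ids = tok(flat, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                # standard causal attention
    return logits[:, -1, :]                       # distribution over next tokens

print(denoise_step(["the cat sat [MASK] the mat"]).shape)
```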

Why This Matters to You

This new model, CaDDi, offers practical benefits for anyone interacting with or developing AI. Imagine you’re using an AI to draft an email or a blog post. Current diffusion models might generate a great sentence, but then lose track of the overall context, leading to inconsistencies. CaDDi, by looking at the whole “story” it’s creating, can produce more coherent and contextually relevant text. This means less editing for you and more reliable AI output.

For example, if you’re using an AI for creative writing, CaDDi could help maintain character consistency or plot arcs much more effectively. The study finds that “CaDDi outperforms discrete diffusion baselines on natural-language benchmarks, substantially narrowing the remaining gap to large autoregressive transformers.” This means the text generated by CaDDi is getting much closer to the quality of leading large language models. How much smoother could your AI interactions become with this improved coherence?

Here’s a quick look at the key improvements:

| Feature | Traditional Discrete Diffusion | CaDDi (Causal Discrete Diffusion Model) |
| --- | --- | --- |
| Markovian assumption | Yes (limits context) | No (conditions on the full trajectory) |
| Error correction | Difficult; errors accumulate | Possible; past states can be revisited |
| LLM weight reuse | Not direct | Direct, no architectural changes |
| Expressive power | Lags behind LLMs | Substantially narrowed gap to LLMs |

The Surprising Finding

Here’s the twist: traditionally, discrete diffusion models have struggled to match the expressive power of causal language models. CaDDi, however, bridges this gap significantly. The research shows that CaDDi not only outperforms existing discrete diffusion models but also substantially narrows the remaining gap to large autoregressive transformers. This is surprising because diffusion models and causal language models typically operate on different principles. By unifying the two kinds of reasoning, sequential (like causal LMs) and temporal (like diffusion models), within a single non-Markovian transformer, CaDDi challenges the assumption that these approaches are inherently separate at the highest performance tiers. It suggests a more integrated future for AI text generation.
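One way to picture the unification: lay the whole generative trajectory out as a single token stream, so one causal decoder attends both across diffusion timesteps (temporal) and across sequence positions (sequential); with a one-state trajectory, this collapses to an ordinary causal LM. The toy sketch below is our illustration, not the paper's formalism.

```python
# Toy illustration (ours): a trajectory of noisy sequences laid out as one
# causal stream, so a single decoder attends across both axes at once.
from itertools import chain

trajectory = [           # x_T (noisiest) ... x_1 (cleanest), toy token ids
    [9, 9, 9, 9],        # t = 3
    [9, 5, 9, 2],        # t = 2
    [7, 5, 1, 2],        # t = 1
]

stream = list(chain.from_iterable(trajectory))  # flatten, time-major
print(stream)  # [9, 9, 9, 9, 9, 5, 9, 2, 7, 5, 1, 2]

# Causal mask over the flat stream: token i may attend to tokens 0..i,
# i.e. to everything earlier in its own state AND all noisier states.
n = len(stream)
mask = [[j <= i for j in range(n)] for i in range(n)]

# Special case: a trajectory containing a single state reduces this to the
# ordinary left-to-right mask of a standard causal language model.
```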

What Happens Next

Looking ahead, we can expect to see CaDDi’s principles integrated into various AI applications. The paper states that this research was presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025). This suggests further development and adoption could occur within the next 12-18 months. Imagine a future where your AI writing assistant, powered by a CaDDi-like system, can generate entire reports or creative stories with remarkable consistency and depth.

For businesses, this could mean AI content generation tools that require less human oversight. For developers, the ability to reuse pretrained LLM weights means faster iteration and potentially more efficient model development. The team revealed that this approach offers a pathway for discrete diffusion models to catch up with, and potentially surpass, the capabilities of current large language models on specific tasks. Your next AI interaction could feel much more natural and intelligent, thanks to these ongoing advancements in non-Markovian discrete diffusion.
