LawFlow: AI Reveals How Lawyers Really Think

New research introduces LawFlow, a dataset capturing the complex decision-making of legal professionals.

A new dataset called LawFlow maps the intricate thought processes of lawyers tackling business formation cases. This research highlights current limitations of large language models (LLMs) in complex legal tasks. It also suggests how AI can better support legal work in the future.

By Katie Rowan

September 5, 2025

4 min read

Key Facts

  • LawFlow is a new dataset capturing end-to-end legal workflows from trained law students.
  • It focuses on business entity formation scenarios, unlike prior input-output datasets.
  • The research compares human and LLM-generated workflows, finding systematic differences.
  • Human workflows are modular and adaptive; LLM workflows are sequential and exhaustive.
  • Legal professionals prefer AI for supportive roles like brainstorming, not end-to-end execution.

Why You Care

Ever wondered how a lawyer truly thinks through a complex case, beyond just finding the right answer? A new study reveals fascinating insights into the intricate, often messy, thought processes of legal experts. If you use AI in your daily work, this research could change how you interact with it in professional settings. It offers a fresh perspective on AI’s role in supporting human intelligence, not replacing it.

What Actually Happened

Researchers have unveiled LawFlow, a novel dataset designed to capture the complete, end-to-end legal workflows of trained law students. The data is grounded in real-world business entity formation scenarios, as detailed in the blog post. Unlike previous datasets, which often focus on simple input-output pairs, LawFlow records the dynamic, iterative reasoning processes lawyers use: handling ambiguity, revising strategies, and adapting to client needs. The technical report explains that the dataset aims to bridge a significant gap, addressing the narrow focus of current AI models on isolated subtasks within the legal field. The team also notes that LawFlow allows a direct comparison between human and large language model (LLM) generated workflows, highlighting key differences in structure and reasoning flexibility.

Why This Matters to You

This new research has practical implications for anyone working with AI, especially in fields requiring complex problem-solving. LawFlow shows that human legal workflows are modular and adaptive, while LLM workflows are often sequential and exhaustive, according to the announcement. This means current AI struggles with the nuanced, flexible thinking humans excel at. For example, imagine you are a content creator who uses AI to draft articles: LawFlow suggests that while the AI can generate text, it might miss subtle connections or fail to adapt its strategy if the initial direction changes. How do you see AI supporting your own creative process?

The study also documents a clear preference among legal professionals regarding AI’s role. They prefer AI in supportive roles, such as brainstorming, identifying blind spots, and surfacing alternatives, rather than executing complex workflows end-to-end. As mentioned in the release, “Human workflows tend to be modular and adaptive, while LLM workflows are more sequential, exhaustive, and less sensitive to downstream implications.” This quote emphasizes the difference in how humans and AI approach problems and suggests a future where AI acts as a smart assistant rather than a primary decision-maker. This is particularly relevant for your own work: think of it as having a highly intelligent research assistant, not a replacement for your own judgment.

Here’s a look at how human and LLM workflows differ:

| Feature | Human Workflows | LLM Workflows |
| --- | --- | --- |
| Structure | Modular, adaptive | Sequential, exhaustive |
| Reasoning | Flexible, iterative | Less flexible |
| Plan Execution | Context-sensitive | Less sensitive |
| Client Adaptation | High | Low |
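The contrast in the table can be sketched in code. The following is an illustrative toy, not the paper's methodology: a "sequential" planner runs every step in fixed order, while an "adaptive" planner drops steps the evolving context has made moot, mirroring the context-sensitive execution LawFlow attributes to humans. All function names here are invented for the example.

```python
def sequential_plan(steps):
    """LLM-like: run every step in order, regardless of context."""
    return [step() for step in steps]

def adaptive_plan(steps, context):
    """Human-like: skip steps the current context has made unnecessary."""
    results = []
    for step in steps:
        if step.__name__ in context.get("skip", set()):
            continue  # e.g. the client already handled this themselves
        results.append(step())
    return results

# Toy steps for a business-formation scenario.
def check_name():
    return "entity name available"

def draft_agreement():
    return "operating agreement drafted"

print(sequential_plan([check_name, draft_agreement]))
# ['entity name available', 'operating agreement drafted']

# Adaptive run: the client already reserved a name, so skip that step.
print(adaptive_plan([check_name, draft_agreement], {"skip": {"check_name"}}))
# ['operating agreement drafted']
```

The sequential planner is exhaustive by construction; the adaptive one is sensitive to downstream implications, which is exactly the axis on which the study finds humans and LLMs diverge.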

The Surprising Finding

Here’s the twist: despite the capabilities of large language models, the study finds they exhibit systematic differences from human thought processes. While LLMs are good at generating sequential, exhaustive responses, they are less sensitive to downstream implications. This is surprising because we often assume AI can mimic human reasoning closely. The paper states that “Our findings also suggest that legal professionals prefer AI to carry out supportive roles, such as brainstorming, identifying blind spots, and surfacing alternatives, rather than executing complex workflows end-to-end.” This challenges the common assumption that AI should automate entire complex tasks. Instead, it highlights AI’s strength in assisting, not fully executing. It suggests a more collaborative future for AI in professional fields.

What Happens Next

The LawFlow dataset, accepted at COLM 2025, points towards a future where AI systems are more collaborative and reasoning-aware. Developers will likely focus on building AI tools that enhance human capabilities. This could mean new AI features appearing in legal tech software by late 2025 or early 2026. For example, imagine an AI tool that helps you brainstorm novel solutions for a problem. It could also identify potential weaknesses in your plan. The documentation indicates that all data and code are available, which will accelerate further research. This open access allows other researchers to build upon these findings. Your actionable takeaway is to look for AI tools that act as intelligent co-pilots. These tools should help you refine your ideas and spot potential issues. They should not try to take over your core decision-making. The industry implications are clear: AI in professional services will likely evolve into a support system, not a complete automation approach.
