S-RAG Boosts AI's Aggregative Question Answering

New research introduces Structured RAG to improve how AI handles complex, multi-document queries.

A new study shows that S-RAG, a novel approach to Retrieval-Augmented Generation (RAG), significantly outperforms existing AI systems at answering complex, aggregative questions. The method structures information at ingestion time, allowing AI to reason over large datasets more effectively and addressing a crucial gap in current RAG capabilities.

By Sarah Kline

November 18, 2025

4 min read

Key Facts

  • S-RAG is a new approach for Retrieval-Augmented Generation (RAG).
  • It is designed to answer aggregative questions requiring information from many documents.
  • S-RAG constructs a structured representation of the corpus during ingestion.
  • It translates natural-language queries into formal queries at inference time.
  • S-RAG substantially outperforms common RAG systems and long-context LLMs.

Why You Care

Have you ever asked an AI a complex question, only for it to struggle with combining information from many sources? Imagine trying to get a comprehensive answer about market trends across several reports. This is a common challenge for current AI systems. New research introduces S-RAG, a method designed to make AI much smarter at these types of complex queries. This could fundamentally change how you interact with AI for research and data analysis.

What Actually Happened

A team of researchers, including Omri Koshorek and Alan Arazi, recently unveiled a new approach called S-RAG, short for Structured Retrieval-Augmented Generation. As detailed in the abstract, S-RAG aims to bridge a significant gap in how AI handles aggregative questions. Current RAG systems excel when only a small part of a corpus is relevant to a query, but they often fail on questions that require information from many documents. The researchers report that S-RAG constructs a structured representation of the corpus during ingestion. At inference time, it translates natural-language queries into formal queries over this representation, enabling gathering and reasoning over far more data than paragraph-level retrieval allows.
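The ingest-then-query pattern described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the SQLite schema, field names, and sample records are assumptions, and the natural-language-to-SQL translation step (which S-RAG performs with a model) is shown here as a fixed string.

```python
import sqlite3

# Ingestion: extract structured records from each document into a table.
# Schema and records are illustrative assumptions for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hotels (name TEXT, city TEXT, price_per_night REAL, has_pool INTEGER)"
)
conn.executemany(
    "INSERT INTO hotels VALUES (?, ?, ?, ?)",
    [
        ("Hotel Lumiere", "Paris", 210.0, 1),
        ("Casa Verde", "Madrid", 95.0, 0),
        ("Le Petit Jardin", "Paris", 150.0, 0),
    ],
)

# Inference: a natural-language question such as "What is the average nightly
# price of Paris hotels with a pool?" would be translated into a formal query
# over the structured representation. Here the translation is hard-coded.
formal_query = (
    "SELECT AVG(price_per_night) FROM hotels "
    "WHERE city = 'Paris' AND has_pool = 1"
)
avg_price = conn.execute(formal_query).fetchone()[0]
print(avg_price)  # 210.0
```

The key design idea is that aggregation (AVG, COUNT, GROUP BY) happens in the query engine over the whole corpus, rather than inside the LLM over a handful of retrieved paragraphs.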

To validate the approach, the team introduced two new datasets, named HOTELS and WORLD CUP. In experiments on these datasets and a public benchmark, S-RAG substantially outperformed both common RAG systems and long-context Large Language Models (LLMs).

Why This Matters to You

This development is crucial if you rely on AI for in-depth information retrieval. Think of it as upgrading your AI assistant from a simple fact-finder to a skilled researcher: instead of isolated snippets, you can expect answers synthesized from multiple sources. For example, imagine you’re planning a complex trip and ask, “What are the average hotel prices in Paris for July, considering options with a pool and within walking distance of major attractions?” A standard RAG system might struggle to combine all these criteria across many hotel listings. S-RAG is built for exactly this kind of query.
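Once the listings live in a structured table, that multi-criteria trip question collapses into a single formal query. The schema below is hypothetical (the column names, the 2 km threshold for "walking distance", and the sample data are all assumptions made for illustration):

```python
import sqlite3

# Hypothetical listings table; columns are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE listings (
        hotel TEXT, city TEXT, month TEXT,
        price REAL, has_pool INTEGER, km_to_center REAL)"""
)
conn.executemany(
    "INSERT INTO listings VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("Hotel A", "Paris", "July", 220.0, 1, 0.8),
        ("Hotel B", "Paris", "July", 180.0, 1, 3.5),  # pool, but too far
        ("Hotel C", "Paris", "July", 140.0, 0, 1.0),  # close, but no pool
    ],
)

# "Average July price in Paris for hotels with a pool within walking
# distance of the center" as one aggregative query:
row = conn.execute(
    """SELECT AVG(price) FROM listings
       WHERE city = 'Paris' AND month = 'July'
         AND has_pool = 1 AND km_to_center < 2.0"""
).fetchone()
print(row[0])  # 220.0 — only Hotel A satisfies every criterion
```

A paragraph-retrieval pipeline would instead need to surface every relevant listing into the prompt and trust the LLM to filter and average them correctly.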

This enhanced capability means your AI can tackle more nuanced questions. It can provide insights that require drawing connections across disparate pieces of information. The paper states, “Retrieval-Augmented Generation (RAG) has become the dominant approach for answering questions over large corpora.” This new S-RAG method refines that dominance for complex scenarios. How much more efficient could your research become with an AI that truly understands aggregation?

Here’s how S-RAG improves AI’s capabilities:

| Feature | Standard RAG Limitation | S-RAG Improvement |
| --- | --- | --- |
| Query type | Small, specific facts | Aggregative, multi-document |
| Data handling | Paragraph-level retrieval | Structured corpus representation |
| Reasoning | Limited cross-document | Enhanced, via formal queries |
| Performance | Struggles with complex queries | Substantially outperforms others |

The Surprising Finding

Here’s the twist: despite the widespread adoption of RAG, the study finds that current methods are “highly focused on cases where only a small part of the corpus (usually a few paragraphs) is relevant per query.” In other words, while RAG is effective for many tasks, it has a blind spot: questions that demand gathering and reasoning over a large set of documents. This challenges the common assumption that simply expanding LLM context windows is enough. The team showed that S-RAG significantly outperforms even long-context LLMs on these aggregative tasks, suggesting that how retrieved information is structured matters more than the sheer volume of text an LLM can process at once.

What Happens Next

The introduction of S-RAG and its accompanying datasets (HOTELS and WORLD CUP) marks an important step. These resources will likely spur further research and development in the field, and we can expect AI applications capable of increasingly complex data analysis to emerge in the next 12-18 months. For example, imagine a financial analyst using an S-RAG powered system. They could ask for a summary of all quarterly earnings calls from a specific industry over five years, including specific mentions of supply chain issues and their impact on revenue, and receive a structured, data-driven answer.

Developers and researchers should explore integrating S-RAG principles into their own systems. This could lead to more capable AI assistants and enterprise search solutions. The paper indicates that the approach enables better reasoning over large, diverse datasets, so the implications are significant for any sector dealing with vast amounts of information, including legal, medical, and scientific research. Your future AI interactions could become far more insightful and comprehensive.
