New AI Method Boosts LLM Reasoning by 8%

QuaSAR introduces 'quasi-symbolic' thinking for more robust AI decisions.

Researchers have developed QuaSAR, a new method that significantly improves how Large Language Models (LLMs) reason. By blending symbolic logic with natural language, QuaSAR enhances accuracy and consistency, making LLMs more reliable for complex tasks.

By Mark Ellison

September 6, 2025

4 min read

Key Facts

  • QuaSAR (Quasi-Symbolic Abstract Reasoning) is a new method to improve LLM reasoning.
  • It enhances Chain-of-Thought (CoT) reasoning by using 'quasi-symbolic' explanations.
  • QuaSAR combines symbolic elements with natural language without full formalization.
  • Experiments show up to 8% accuracy improvement on challenging reasoning tasks.
  • The method enhances robustness and consistency in LLMs.

Why You Care

Ever wonder why your favorite AI chatbot sometimes gives you a surprisingly unhelpful answer? Or perhaps it struggles with a complex problem you thought it could handle? Imagine if these AI models could think more clearly and consistently. That is precisely what new research aims to achieve. A recent announcement details a novel approach that could make Large Language Models (LLMs) smarter and more reliable. This development matters because it directly impacts the accuracy and robustness of the AI tools you use daily. It’s about making AI less prone to errors and more dependable.

What Actually Happened

Researchers Leonardo Ranaldi, Marco Valentino, and André Freitas have introduced a new method called QuaSAR (Quasi-Symbolic Abstract Reasoning). This approach aims to improve Chain-of-Thought (CoT) reasoning in LLMs, according to the announcement. CoT is a common strategy in which LLMs break down complex tasks into smaller, intermediate steps. However, as detailed in the blog post, these explanations can suffer from content biases that affect their reliability. Fully symbolic approaches, meanwhile, require a complete translation from natural language into formal languages, which can limit efficiency and flexibility. QuaSAR offers a trade-off: it guides LLMs to operate at a higher level of abstraction, formalizing only the relevant variables and predicates. This allows symbolic elements to coexist with natural language. The team revealed that QuaSAR improves in-context learning and also helps construct demonstrations for improving smaller models’ reasoning capabilities.
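To make the idea concrete, here is a minimal illustrative sketch (not the authors' actual implementation) of what "formalizing only the relevant variables and predicates" might look like in a prompt. It contrasts a plain Chain-of-Thought prompt with a quasi-symbolic one; the variable names and prompt wording are assumptions for illustration only.

```python
# Hypothetical sketch: plain CoT vs. quasi-symbolic CoT prompting.
# This is NOT the QuaSAR authors' code; it only illustrates the idea of
# mixing a small symbolic abstraction with natural-language instructions.

QUESTION = "Alice has 3 apples. Bob gives her twice as many. How many does she have?"

def plain_cot_prompt(question: str) -> str:
    """Standard CoT: ask the model to reason step by step in prose."""
    return f"{question}\nLet's think step by step."

def quasi_symbolic_prompt(question: str) -> str:
    """Quasi-symbolic CoT: declare only the relevant variables and facts
    symbolically, then let the model reason in natural language over them."""
    abstraction = (
        "Variables: a = Alice's initial apples, g = apples given by Bob.\n"
        "Facts: a = 3; g = 2 * a.\n"
        "Goal: total = a + g."
    )
    return f"{question}\n{abstraction}\nSolve the goal, explaining each step."

if __name__ == "__main__":
    print(plain_cot_prompt(QUESTION))
    print()
    print(quasi_symbolic_prompt(QUESTION))
```

The key design point is that only the quantities that matter (`a`, `g`, `total`) are formalized; everything else stays in natural language, avoiding a full translation into a formal logic.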

Why This Matters to You

This development means your interactions with AI could become much more precise. Think of it as giving AI a more structured way to think without losing its ability to understand human language. The research shows that quasi-symbolic abstractions can significantly improve CoT-based methods. For example, imagine you’re using an AI assistant to plan a complex project: with QuaSAR, the AI could break the tasks down more logically and would be less likely to make a biased or inconsistent recommendation, leading to more reliable outcomes for your specific needs. The study finds that the method enhances robustness and consistency, specifically on challenging adversarial variations of both natural language and symbolic reasoning tasks. The authors state, “Our experiments show that quasi-symbolic abstractions can improve CoT-based methods by up to 8% accuracy, enhancing robustness and consistency on challenging adversarial variations on both natural language (i.e. MMLU-Redux) and symbolic reasoning tasks (i.e., GSM-Symbolic).” How might this improved accuracy change the way you use AI in your daily life or work?

Here are some areas where QuaSAR could make a difference:

  • Enhanced Problem Solving: LLMs could tackle more intricate logical puzzles or coding challenges with greater success.
  • Reduced Bias: The ‘quasi-symbolic’ approach helps mitigate content biases, leading to fairer and more objective AI responses.
  • Improved Consistency: Your AI interactions will become more predictable and reliable, giving you more trustworthy information.
  • Better Small Models: Even smaller, more efficient AI models can benefit from this method. This means AI could become more accessible.

The Surprising Finding

Here’s the twist: traditionally, improving AI reasoning often involved fully translating natural language into rigid formal logic, which can be cumbersome and inefficient. However, the paper states that QuaSAR achieves significant improvements without this complete formalization. The team revealed that quasi-symbolic abstractions can improve CoT-based methods by up to 8% accuracy. This improvement applies to both natural language tasks like MMLU-Redux and symbolic reasoning tasks such as GSM-Symbolic. This is surprising because it challenges the assumption that full symbolic translation is necessary for AI reasoning. Instead, a partial, ‘quasi-symbolic’ approach yields substantial gains. It demonstrates that a hybrid method, blending the flexibility of natural language with the precision of symbolic elements, can be remarkably effective. This approach offers a practical middle ground.

What Happens Next

This research, submitted in February 2025 and revised in September 2025, points to a future where AI reasoning is more dependable. We can expect to see this ‘quasi-symbolic’ approach integrated into future LLM architectures. For example, imagine a financial analysis tool that uses AI: with QuaSAR, it could analyze market data and economic reports with higher accuracy and provide more reliable predictions, thanks to its enhanced logical reasoning. The documentation indicates that QuaSAR could also be used to construct demonstrations, meaning it can help train smaller models more effectively. For you, this translates to more intelligent and less error-prone AI applications across various industries. Expect to see these advancements rolled out in the coming months, potentially within the next 12-18 months, affecting everything from customer service chatbots to complex scientific research assistants. This development promises a future where AI’s reasoning capabilities are consistently reliable.
