Laude Institute Funds AI Evaluation with 'Slingshots' Grants

The new program aims to boost AI research and development beyond traditional academic limits.

The Laude Institute has launched its 'Slingshots' AI grants program, funding 15 projects focused on AI evaluation. This initiative provides critical resources like funding and compute power to accelerate AI research.

By Mark Ellison

November 10, 2025

3 min read

Key Facts

  • The Laude Institute announced its first batch of 'Slingshots' AI grants.
  • The program funds 15 projects, primarily focusing on AI evaluation.
  • Grants provide funding, compute power, and product/engineering support.
  • Recipients pledge to produce a final work product, such as a startup or open-source code.
  • Projects include Terminal Bench, ARC-AGI, Formula Code, and BizBench.

Why You Care

Ever wondered how we truly know if an AI is smart enough? The Laude Institute just announced its first batch of ‘Slingshots’ AI grants. The news matters because it directly addresses an essential challenge in artificial intelligence: how to evaluate AI systems effectively. Your future interactions with AI could be safer and more reliable thanks to these efforts.

What Actually Happened

On Thursday, the Laude Institute unveiled the first batch of grants under its ‘Slingshots’ AI program. The program is designed as an accelerator for researchers, according to the announcement, providing resources often unavailable in typical academic settings: funding, significant compute power, and hands-on product and engineering support. In return, grant recipients commit to producing a final work product. This could be a new startup, an open-source codebase, or another tangible artifact, as detailed in the blog post. The first group of grantees includes 15 projects, with a particular focus on the complex issue of AI evaluation.

Why This Matters to You

This initiative directly impacts the reliability and safety of the AI tools you might use daily. Better evaluation means more trustworthy AI. Imagine an AI assistant that helps manage your finances. You would want to be absolutely sure it understands complex scenarios. This program helps ensure that kind of rigorous testing. The grants support diverse approaches to understanding AI capabilities.

Slingshots Grant Focus Areas

  • AI Agent Optimization: Projects like Formula Code evaluate AI’s ability to refine existing code (a toy scoring sketch follows this list).
  • White-Collar AI Benchmarking: BizBench proposes a comprehensive standard for business-oriented AI agents.
  • Reinforcement Learning: Some grants explore new structures for how AI learns through trial and error.
  • Model Compression: Other projects focus on making AI models more efficient without losing performance.
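To make the evaluation idea concrete, here is a minimal, hypothetical Python sketch of how a code-refinement benchmark in the spirit of Formula Code might score an agent’s output: run the baseline and the refined version on the same input, confirm the results match, and measure the speedup. The function names, task, and scoring are illustrative assumptions, not the actual Formula Code harness.

```python
# Hypothetical sketch of a code-refinement check, loosely in the spirit of the
# Formula Code description above. Names, task, and scoring are illustrative
# assumptions, not the actual Formula Code harness.
import time


def baseline_sum_of_squares(values):
    # Deliberately naive reference implementation the agent is asked to refine.
    total = 0
    for v in values:
        total += v * v
    return total


def candidate_sum_of_squares(values):
    # Stand-in for the refined code an AI agent might submit.
    return sum(v * v for v in values)


def score_refinement(baseline, candidate, test_input):
    """Return (correct, speedup) for a candidate refinement of a baseline."""
    expected = baseline(test_input)

    start = time.perf_counter()
    result = candidate(test_input)
    candidate_time = time.perf_counter() - start

    start = time.perf_counter()
    baseline(test_input)
    baseline_time = time.perf_counter() - start

    correct = result == expected
    speedup = baseline_time / candidate_time if candidate_time > 0 else float("inf")
    return correct, speedup


if __name__ == "__main__":
    data = list(range(100_000))
    ok, speedup = score_refinement(baseline_sum_of_squares, candidate_sum_of_squares, data)
    print(f"correct={ok}, speedup={speedup:.2f}x")
```

A real harness would repeat the timing runs and sandbox the submitted code, but the core idea is the same: check correctness first, then reward efficiency.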

How confident are you in the AI systems you currently interact with? Independent, third-party benchmarks are central to answering that question. “I do think people continuing to evaluate on core third-party benchmarks drives progress,” Yang told TechCrunch. This perspective highlights the need for independent assessment. Your trust in AI depends on these kinds of evaluation methods.

The Surprising Finding

What’s particularly interesting is the strong emphasis on AI evaluation. While developing new AI models often grabs headlines, the ‘Slingshots’ program highlights the crucial, yet often overlooked, work of rigorously testing them. This focus challenges the common assumption that progress lies solely in building new models. Instead, the program reflects a deep commitment to ensuring AI works as intended. “I’m a little bit worried about a future where benchmarks just become specific to companies,” Yang stated, underscoring the importance of independent benchmarks. This concern suggests a potential for bias if evaluation is left solely to the developers themselves.

What Happens Next

These initial 15 projects will likely unfold over the next 12 to 18 months. We can expect to see early results and open-source contributions by late 2026 or early 2027. For example, a project like Terminal Bench, which is a command-line coding benchmark, could release its findings. This could lead to more standardized ways to test AI coding tools. Your actionable takeaway is to keep an eye on these independent evaluation efforts. They will shape the future of AI safety and reliability. The industry implications are significant, pushing for more transparent and verifiable AI performance metrics.
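As an illustration of what a command-line coding benchmark involves, here is a minimal, hypothetical Python sketch: place a file in a temporary working directory, run a shell command that is supposed to come from the model under evaluation, and compare its output against the expected answer. The task, command, and pass/fail check are assumptions for illustration, not the real Terminal Bench code, and the example assumes a Unix-like shell with wc available.

```python
# Hypothetical sketch of checking one command-line task, loosely in the spirit
# of Terminal Bench. The task, command, and scoring are illustrative
# assumptions, not the real Terminal Bench harness. Assumes a Unix-like shell.
import subprocess
import tempfile
from pathlib import Path


def check_task(agent_command: str, workdir: Path, expected_output: str) -> bool:
    """Run the agent's shell command in workdir and compare stdout to the answer."""
    result = subprocess.run(
        agent_command,
        shell=True,
        cwd=workdir,
        capture_output=True,
        text=True,
        timeout=30,
    )
    return result.returncode == 0 and result.stdout.strip() == expected_output


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        workdir = Path(tmp)
        (workdir / "data.txt").write_text("alpha\nbeta\ngamma\n")

        # Task: "print the number of lines in data.txt".
        # Pretend this command came from the model being evaluated.
        agent_command = "wc -l < data.txt"

        print("task passed" if check_task(agent_command, workdir, "3") else "task failed")
```

Real benchmarks wrap many such tasks in isolated containers, but this pass/fail structure is what makes results comparable across models.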
