AI vs. Humans: Who Picks Better Judges for Startups?

A new study compares AI and human performance in high-stakes judge assignments for a major startup competition.

Researchers at Harvard's President's Innovation Challenge deployed an AI algorithm to assign judges to startup pitches and compared its effectiveness against assignments made by human experts. Surprisingly, the AI matched the human experts on match quality.

By Katie Rowan

October 15, 2025

3 min read

Key Facts

  • The study compared human and AI judge assignment at Harvard's President's Innovation Challenge.
  • Researchers developed an AI algorithm named Hybrid Lexical-Semantic Similarity Ensemble (HLSE).
  • The AI's performance was evaluated against human expert assignments using blinded match-quality scores.
  • The AI algorithm performed comparably to human experts in judge assignment.
  • The Harvard President's Innovation Challenge awards over $500,000 to student and alumni startups.

Why You Care

Ever wonder if an algorithm could do your job as well as, or even better than, you can? What if that job involved making essential decisions in a high-stakes environment? A recent study dives into exactly this question, examining whether artificial intelligence (AI) can effectively replace human judgment in complex tasks. The findings could change how many industries approach expert matching, with direct implications for your professional life.

What Actually Happened

Researchers tackled an essential challenge at the Harvard President’s Innovation Challenge (PIC), a major venture competition. This event awards over $500,000 to student and alumni startups, according to the announcement. The core problem was assigning suitable judges to diverse startup submissions, a task that requires deep semantic understanding and domain expertise, as mentioned in the release. The team developed an AI-based judge-assignment algorithm called Hybrid Lexical-Semantic Similarity Ensemble (HLSE) and deployed it at the competition itself. The study then compared the AI’s assignments against human expert assignments, using blinded match-quality scores from the judges, the research shows.
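The study does not publish HLSE's internals, but the idea of a hybrid lexical-semantic matcher can be sketched in a few lines. Everything below, including the `hybrid_score` function, the `alpha` blending weight, and the toy TF-IDF, is an illustrative assumption rather than the authors' implementation; in practice the semantic score would come from a sentence-embedding model.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Simple smoothed TF-IDF (lexical representation) for a list of texts.
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    return [
        {t: tf * (math.log((1 + n) / (1 + df[t])) + 1.0)
         for t, tf in Counter(toks).items()}
        for toks in tokenized
    ]

def cosine(u, v):
    # Cosine similarity between two sparse dict vectors.
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_score(judge_bio, startup_desc, semantic_sim, alpha=0.5):
    # Blend lexical overlap with a semantic similarity supplied by an
    # embedding model (stubbed out here as a plain argument).
    jv, sv = tfidf_vectors([judge_bio, startup_desc])
    return alpha * cosine(jv, sv) + (1 - alpha) * semantic_sim
```

In a scheme like this, judges would be ranked per startup by `hybrid_score`, with the top-scoring available judges assigned.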

Why This Matters to You

This isn’t just about startup competitions; it’s about the broader role of AI in complex decision-making. Imagine you’re organizing a large conference, needing to match speakers with appropriate session moderators. Or perhaps your company needs to pair clients with consultants based on highly specialized needs. This study suggests AI could be a valuable tool. The research indicates that AI can handle tasks requiring nuanced understanding. “There is growing interest in applying artificial intelligence (AI) to automate and support complex decision-making tasks,” the paper states. This means less manual effort and potentially more consistent results for you.

Consider these potential benefits for your organization:

  • Reduced Manual Effort: Automate tedious matching processes.
  • Improved Consistency: Algorithms apply criteria uniformly.
  • Faster Assignments: AI can process data much quicker than humans.
  • Data-Driven Insights: Gain new perspectives on optimal pairings.

How might an AI-powered matching system streamline a process in your current role?

The Surprising Finding

Here’s the unexpected twist: the AI algorithm performed almost identically to human experts. The study found that the AI’s match-quality scores were statistically indistinguishable from those of the human expert assignments. Specifically, the evaluation showed an AUC (Area Under the Curve) of 0.48, with a p-value of 0.40. This challenges the common assumption that human intuition is always superior in tasks demanding semantic understanding and domain expertise. It suggests that algorithms can achieve a similar level of effectiveness in these complex scenarios. This finding is significant because it opens doors for AI in areas previously thought to be exclusively human domains.
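To see what that AUC figure means, note that an AUC near 0.5 says a blinded rater is no better than chance at telling AI-made assignments from human-made ones by their match-quality scores. The small function below computes AUC directly from its probabilistic definition; the scores are made-up illustration, not the study's data.

```python
def auc(ai_scores, human_scores):
    # Probability that a randomly chosen AI-assigned match outscores a
    # randomly chosen human-assigned one, counting ties as half.
    wins = ties = 0
    for a in ai_scores:
        for h in human_scores:
            if a > h:
                wins += 1
            elif a == h:
                ties += 1
    return (wins + 0.5 * ties) / (len(ai_scores) * len(human_scores))

# Made-up blinded match-quality scores (1-5 scale) for illustration.
ai = [4, 3, 5, 3, 4]
human = [4, 4, 3, 5, 3]
print(auc(ai, human))  # -> 0.5: the two sets are indistinguishable
```

An AUC of 1.0 would mean every AI assignment outscored every human one; 0.48 sits almost exactly at the chance level of 0.5.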

What Happens Next

This research paves the way for wider adoption of AI in expert matching and complex decision-making. We might see similar AI systems implemented in other high-stakes environments within the next 12-18 months. For example, think about grant proposal reviews or peer-review processes for academic journals. These systems could significantly improve efficiency and fairness. Companies should start exploring how AI could assist their internal resource allocation. The team revealed that their Hybrid Lexical-Semantic Similarity Ensemble (HLSE) algorithm offers a viable alternative to human assignment. This could free up human experts to focus on more strategic, less administrative tasks. The industry implications are clear: AI is ready to take on more roles than many previously believed.
