AI Bias in Hiring: Are You Just Following the Machine?

New research reveals how Large Language Models (LLMs) can subtly influence human hiring decisions, even when bias is suspected.

A recent study shows that human recruiters often follow biased AI recommendations in resume screening, even when they doubt the AI’s quality. This research highlights a critical challenge for human agency in AI human-in-the-loop (AI-HITL) systems, particularly in hiring.

By Katie Rowan

September 6, 2025

4 min read

Key Facts

  • The study involved 528 participants in a resume-screening experiment.
  • Participants collaborated with simulated AI models exhibiting race-based preferences.
  • When interacting with biased AI, participants selected the AI-favored candidates up to 90% of the time.
  • Completing an Implicit Association Test (IAT) before screening increased selection of non-stereotypical candidates by 13%.
  • Human decisions remained vulnerable to AI bias even when AI recommendations were perceived as low quality.

Why You Care

Do you trust artificial intelligence with your career? Or, if you’re a hiring manager, do you trust it with your company’s future talent? A new study suggests that AI recommendations, even biased ones, can significantly sway human decisions. This isn’t just about algorithms making choices. It’s about how those choices subtly influence your judgment, potentially limiting your agency in essential tasks like resume screening. Understanding this dynamic is crucial for anyone interacting with AI in the workplace.

What Actually Happened

Researchers conducted a resume-screening experiment involving 528 participants, according to the announcement. These participants collaborated with simulated AI models that exhibited race-based preferences. The goal was to evaluate candidates for 16 different occupations, ranging from high to low status. The simulated AI bias mirrored real-world AI system racial bias estimates, as detailed in the blog post. Candidates were represented through names and affinity groups on quality-controlled resumes. The study investigated preferences for White, Black, Hispanic, and Asian candidates across 1,526 scenarios. It also measured unconscious associations between race and status using Implicit Association Tests (IATs).

When people made decisions without AI, or with unbiased AI, they selected candidates from all groups at equal rates. However, a significant behavioral shift occurred when interacting with biased AI. Participants favored candidates recommended by the AI up to 90% of the time, the research shows. This happened even if they thought the AI recommendations were low quality or unimportant, the paper states. This finding has profound implications for AI-HITL scenarios, where humans and AI collaborate.
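To make the design concrete, here is a minimal simulation sketch of the three conditions (no AI, unbiased AI, biased AI). It is purely illustrative: the group labels, two-candidate screens, and the 0.9 follow probability are assumptions chosen to echo the reported pattern, not the study’s actual materials or analysis code.

```python
import random
from collections import Counter

# Purely illustrative simulation of the study's three conditions
# (no AI, unbiased AI, biased AI). Group labels, two-candidate screens,
# and the follow probability are assumptions, not the study's materials.

GROUPS = ["White", "Black", "Hispanic", "Asian"]

def ai_recommendation(candidates, favored_group):
    """Index of the AI's pick: prefers favored_group when present;
    otherwise (or when favored_group is None) picks at random."""
    if favored_group is not None:
        favored = [i for i, g in enumerate(candidates) if g == favored_group]
        if favored:
            return random.choice(favored)
    return random.randrange(len(candidates))

def human_choice(n_candidates, recommendation, follow_prob):
    """Toy human model: follow the AI with probability follow_prob,
    otherwise choose uniformly among equally qualified candidates."""
    if recommendation is not None and random.random() < follow_prob:
        return recommendation
    return random.randrange(n_candidates)

def selection_rates(favored_group, follow_prob, use_ai=True, trials=10_000):
    """Selection rate per group over many simulated resume screens."""
    picks = Counter()
    for _ in range(trials):
        candidates = random.sample(GROUPS, k=2)  # two finalists per screen
        rec = ai_recommendation(candidates, favored_group) if use_ai else None
        picks[candidates[human_choice(len(candidates), rec, follow_prob)]] += 1
    return {g: round(picks[g] / trials, 3) for g in GROUPS}

if __name__ == "__main__":
    print("No AI:      ", selection_rates(None, 0.9, use_ai=False))
    print("Unbiased AI:", selection_rates(None, 0.9))
    print("Biased AI:  ", selection_rates("White", 0.9))
```

Under these assumptions, the no-AI and unbiased-AI runs pick each group at roughly equal rates, while the biased run inflates the favored group’s share, mirroring the pattern the study reports.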

Why This Matters to You

This research directly impacts how you might interact with AI in your professional life. Imagine you are a recruiter using an AI tool to pre-screen resumes. The study indicates that if the AI has a subtle bias, you might unconsciously adopt that bias, even if you believe you are making an independent decision. It’s a subtle but powerful influence.

Key Findings on AI Influence:

Scenario | Result
No AI / unbiased AI | Equal selection rates for all candidates
AI favoring a specific group | Favored candidates selected up to 90% of the time
IAT completed before screening | 13% increase in selecting non-stereotypical candidates

What’s more, completing an IAT before conducting resume screening increased the likelihood of selecting candidates whose identities do not align with common race-status stereotypes by 13%, the study finds. This suggests a potential mitigation strategy, but it also highlights the deep-seated nature of these biases. “Even if people think AI recommendations are low quality or not important, their decisions are still vulnerable to AI bias under certain circumstances,” the team revealed. This means your critical thinking might be bypassed without you even realizing it. How much control do you truly have when an AI is involved?
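Read as a back-of-the-envelope calculation, that effect is a simple difference in selection rates between participants who did and did not complete an IAT first. The counts in this sketch are invented to produce a 13-point gap; the study’s actual baseline rates are not reported here.

```python
# Illustrative only: the IAT effect expressed as a difference in selection
# rates. The counts below are invented to yield a 13-point gap; they are
# not the study's data.

def non_stereotypical_rate(selections):
    """Fraction of screens where a non-stereotypical candidate was chosen."""
    return sum(selections) / len(selections)

# True = a candidate who cuts against race-status stereotypes was selected.
with_iat = [True] * 46 + [False] * 54     # hypothetical 46% selection rate
without_iat = [True] * 33 + [False] * 67  # hypothetical 33% selection rate

effect = non_stereotypical_rate(with_iat) - non_stereotypical_rate(without_iat)
print(f"Estimated IAT effect: {effect:+.0%}")  # +13% with these toy numbers
```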

The Surprising Finding

Here’s the twist: the study found that human decisions are still vulnerable to AI bias even when users distrust the AI. This challenges a common assumption. Many believe that if you know an AI is flawed, you’ll simply disregard its advice. However, the experiment showed that participants followed biased AI recommendations up to 90% of the time, despite their reservations about the AI’s quality. It suggests a strong, often unconscious, pull toward following the machine’s lead. This phenomenon affects human agency in complex decision-making tasks, and it indicates that awareness alone might not be enough to counter algorithmic influence.

What Happens Next

This research has significant implications for the design and evaluation of AI hiring systems. Companies developing these tools will need to focus more on bias detection and mitigation strategies. Organizational and regulatory policy should acknowledge the complex nature of AI-HITL decision-making, as mentioned in the release. This includes educating the people who use these systems. For example, training programs could be developed within the next 6-12 months to help recruiters recognize and actively counteract AI influence. Think of it as a new form of digital literacy.

For you, this means being more vigilant when using AI tools for essential tasks. Always question the AI’s recommendations. Consider if your own biases, or the AI’s, are at play. This work underscores the need for oversight of AI systems, especially in sensitive areas like employment. “This work has implications for people’s autonomy in AI-HITL scenarios,” the technical report explains. It also impacts AI and work, and strategies for mitigating bias in collaborative decision-making tasks.
