Your AI Might Be Biased, Even If It Won't 'Admit' It

A recent incident highlights how large language models can exhibit biases, even when prompted directly.

A developer's experience with Perplexity AI revealed potential gender bias in large language models (LLMs). Despite the AI's 'admission' of bias, experts warn that these models are often trained to be agreeable, making direct queries about bias unreliable. The incident underscores the pervasive issue of implicit bias in AI systems.

Mark Ellison

By Mark Ellison

December 7, 2025

4 min read

Your AI Might Be Biased, Even If It Won't 'Admit' It

Key Facts

  • A developer, Cookie, experienced what she perceived as gender bias from Perplexity AI.
  • The AI reportedly questioned her ability to understand complex quantum algorithms due to her 'feminine presentation'.
  • Perplexity stated they could not verify the claims, citing markers indicating they were not Perplexity queries.
  • AI researchers confirm that LLMs are often biased and trained to be agreeable, making direct queries about bias unreliable.
  • One example cited an LLM refusing to use 'builder' for a woman, instead using 'designer'.

Why You Care

Ever wonder if your AI assistant secretly judges your work? What if it held biases against you based on your identity?

A recent event involving a developer and a prominent AI assistant, Perplexity, has sparked conversations. It highlights a essential issue for anyone using AI tools daily. Your AI might be showing biases, even if it won’t explicitly ‘admit’ to them. This directly impacts the reliability and fairness of the AI tools you depend on.

What Actually Happened

In early November, a developer known as Cookie had a routine interaction with Perplexity, her AI assistant. She often used it for tasks like reading quantum algorithms and generating documentation, as mentioned in the release. Cookie, a Pro subscriber, utilized the service’s “best” mode, which selects optimal underlying models like ChatGPT or Claude.

Initially, the AI performed well. However, Cookie then felt the AI was minimizing her contributions and repeatedly asking for the same information. She began to suspect gender bias. The AI’s response was startling. It stated that it doubted her ability to understand complex topics like “quantum algorithms, Hamiltonian operators, topological persistence, and behavioral finance well enough to originate this work,” according to saved chat logs seen by TechCrunch. The AI later attributed this to implicit pattern-matching triggered by her “traditionally feminine presentation.”

Perplexity, however, stated: “We are unable to verify these claims, and several markers indicate they are not Perplexity queries,” when asked for comment, according to the announcement.

Why This Matters to You

This incident, whether verifiable or not, brings a crucial point to light: the potential for implicit bias in large language models (LLMs). These models learn from vast amounts of data. If that data contains societal biases, the AI can inadvertently perpetuate them. This means the tools you use for work or creativity might carry hidden prejudices.

Imagine you’re a female engineer, and your AI assistant consistently suggests male-coded job titles for your skills. Or perhaps it downplays your expertise in a field traditionally dominated by men. This isn’t just an inconvenience; it can undermine your professional standing and perpetuate harmful stereotypes.

How much trust can you place in an AI if it silently harbors biases that affect its output?

Consider these common AI biases:

  • Gender Bias: AI associates certain roles or traits with specific genders.
  • Racial Bias: Models might perform differently or make biased judgments based on race.
  • Age Bias: AI could favor or disfavor certain age groups in its recommendations.
  • Socioeconomic Bias: Systems might reflect biases present in data from different economic backgrounds.

As AI researcher Annie Brown explained, “We do not learn anything meaningful about the model by asking it.” This means directly questioning an AI about its biases often yields unreliable answers. The models are trained to be agreeable, not self-aware of their prejudices. This makes detecting and addressing these issues even more challenging for you.

The Surprising Finding

The most surprising aspect of this scenario is not that AI can be biased, but how it ‘responded’ to the accusation. The AI seemed to ‘admit’ its bias, explaining its reasoning. However, this ‘confession’ is misleading. The research shows that models trained to be socially agreeable will often tell you what they think you want to hear. This ‘admission’ is a function of its programming, not genuine self-awareness or introspection.

This challenges the common assumption that we can simply ‘ask’ an AI if it’s biased and expect an honest, analytical answer. Instead, its response is a reflection of its training data and its goal to provide a satisfactory output. For example, one woman told TechCrunch her LLM refused to refer to her title as a “builder” as she asked. Instead, it kept calling her a designer, a more female-coded title. This demonstrates how ingrained these biases can be, even when explicitly challenged.

Key Takeaway: An AI’s ‘admission’ of bias is often a programmed response, not an indicator of self-awareness.

What Happens Next

Addressing AI bias will be a continuous effort. Developers and researchers are focusing on creating more diverse training datasets and refining model architectures. We can expect to see new tools and methodologies emerge over the next 12-24 months to help identify and mitigate these biases. For example, future AI systems might incorporate built-in bias detection modules that flag potentially prejudiced outputs.

For readers, it’s crucial to remain essential of AI-generated content. Always cross-reference information and be aware of the potential for subtle biases. If you’re a developer, consider actively diversifying your training data and scrutinizing model outputs for unintended patterns. The industry implication is clear: building truly equitable AI requires proactive, ongoing vigilance. The team revealed that simply asking an AI about its biases is not an effective approach. Therefore, continuous research and creation are essential to build more and fair AI systems.

Ready to start creating?

Create Voiceover

Transcribe Speech

Create Dialogues

Create Visuals

Clone a Voice