AI's Gender Bias: More Representation, Same Old Stereotypes

New research reveals generative AI models perpetuate harms despite increased female representation.

A recent study accepted by NeurIPS 2025 uncovers a critical flaw in generative AI: simply increasing the representation of women in outputs does not eliminate harmful stereotypes. Researchers found that while women appear more frequently in AI-generated biographies, underlying biases in how they are described persist, reinforcing existing systems of oppression.

By Sarah Kline

November 1, 2025

4 min read

Key Facts

  • The study investigated gender representation in occupations generated by large language models (LLMs).
  • Women are more represented than men when LLMs generate biographies or personas.
  • Representational biases persist in *how* different genders are described, despite increased female representation.
  • Statistically significant word differences across genders were found, perpetuating stereotypes.
  • The research was accepted by the 39th Conference on Neural Information Processing Systems (NeurIPS 2025).

Why You Care

Ever wonder if the AI tools you use are truly fair? What if efforts to make AI more inclusive are actually masking deeper, more insidious biases? A new study reveals a critical issue: simply increasing representation in generative AI doesn’t automatically fix ingrained stereotypes. This directly affects how you interact with AI and the information you receive.

What Actually Happened

Researchers Jennifer Mickel, Maria De-Arteaga, Leqi Liu, and Kevin Tian investigated gender representation in occupations generated by large language models (LLMs). According to the announcement, their work, titled “More of the Same: Persistent Representational Harms Under Increased Representation,” was accepted by the 39th Conference on Neural Information Processing Systems (NeurIPS 2025). The team revealed that while interventions have altered gender distribution over time, leading to more women being represented than men when models generate biographies or personas, the underlying biases remain. This means the way genders are represented still carries harmful stereotypes.

Why This Matters to You

This research highlights a significant problem with how generative AI (AI that creates new content) operates. It’s not enough for AI to just show more diverse faces; the quality and context of that representation truly matter. Imagine you’re using an AI to generate content for your business or create a story. If the AI subtly reinforces stereotypes, it could inadvertently impact your message or audience perceptions. For example, an AI might generate a biography for a female CEO that focuses on her family life, while a male CEO’s biography emphasizes professional achievements. This perpetuates harmful societal norms.

Key Findings on Representational Harms:

  • Increased Female Representation: Women are more represented than men when models generate biographies or personas.
  • Persistent Biases: Statistically significant word differences across genders reveal ongoing stereotypes.
  • Proliferation of Harms: This leads to the reinforcement of stereotypes and neoliberal ideals.
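To make the “statistically significant word differences” finding concrete, here is a minimal sketch of how such differences can be detected. This is not the paper’s actual methodology; it uses a basic 2x2 chi-square test on tiny, hypothetical token lists, whereas a real audit would run over thousands of generated biographies:

```python
from collections import Counter

def chi2_2x2(a, b, na, nb):
    """Chi-square statistic for a 2x2 table: occurrences of one word
    (present vs. absent) in corpus A (na tokens) vs. corpus B (nb tokens)."""
    total = na + nb
    present = a + b
    absent = total - present
    obs = [[a, na - a], [b, nb - b]]
    exp = [[na * present / total, na * absent / total],
           [nb * present / total, nb * absent / total]]
    return sum((o - e) ** 2 / e
               for obs_row, exp_row in zip(obs, exp)
               for o, e in zip(obs_row, exp_row) if e > 0)

def flag_gendered_words(corpus_a, corpus_b, threshold=3.84):
    """Return {word: statistic} for words whose frequency gap between the
    corpora exceeds the chi-square critical value (3.84 ~ p < 0.05, 1 dof)."""
    counts_a, counts_b = Counter(corpus_a), Counter(corpus_b)
    na, nb = len(corpus_a), len(corpus_b)
    flagged = {}
    for word in set(counts_a) | set(counts_b):
        stat = chi2_2x2(counts_a[word], counts_b[word], na, nb)
        if stat >= threshold:
            flagged[word] = round(stat, 2)
    return flagged

# Hypothetical toy token lists standing in for AI-generated biographies.
bios_women = ["family"] * 6 + ["leader"] * 4 + ["company"] * 2
bios_men = ["family"] * 1 + ["leader"] * 9 + ["company"] * 2

# Flags "family" and "leader" (skewed across genders) but not "company".
print(flag_gendered_words(bios_women, bios_men))
```

A word that clears the 3.84 threshold is significantly over- or under-used for one gender; even when women appear in more biographies overall, skewed word lists like this are how the qualitative bias shows up.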

How might these subtle biases in AI outputs shape your own perceptions or even your professional decisions?

As the paper states, “To recognize and mitigate the harms of generative AI systems, it is crucial to consider who is represented in the outputs of generative AI systems and how people are represented.” This emphasizes the dual challenge of both quantity and quality in AI representation. Your interactions with AI are constantly being shaped by these unseen biases.

The Surprising Finding

Here’s the twist: common sense might suggest that simply showing more women in AI-generated content would inherently reduce bias. The study finds otherwise: despite interventions that increased female representation, representational biases persist in how different genders are described, and statistically significant word differences across genders remain. This challenges the assumption that a numerical increase in representation translates directly into fewer harmful stereotypes, or that ‘more’ automatically means ‘better’ or ‘fairer’ in AI outputs. It points to a deeper, more complex problem within the models themselves.

What Happens Next

This research, accepted for NeurIPS 2025, suggests a pressing need for stronger bias mitigation strategies in generative AI. We can expect future AI development to focus not just on numerical representation, but on the qualitative aspects of how individuals are portrayed. For example, developers might implement new algorithms that analyze and correct for stereotypical language patterns in generated text. You, as a user, should remain critical of AI outputs, questioning whether the portrayals are balanced and fair. The industry implications are significant: AI developers must move beyond surface-level fixes to address the root causes of representational harms. As the technical report explains, this research highlights that current interventions, while increasing female representation, still “reinforce existing systems of oppression.”
