AI's Hidden Bias: Racial Stereotypes in Storytelling

New research uncovers how large language models perpetuate colonial narratives about women.

A recent study reveals that a large language model, LLaMA 3.2-3B, generates short stories about Black and white women that reinforce racial biases. The research, which analyzed 2100 texts, found that the model often perpetuates a 'colonially structured framing' of the female body, despite producing grammatically coherent narratives.


By Sarah Kline

September 15, 2025

3 min read


Key Facts

  • The study analyzed 2100 short stories generated by LLaMA 3.2-3B.
  • The research focused on narratives about Black and white women in Portuguese.
  • Three main discursive representations emerged: social overcoming, ancestral mythification, and subjective self-realization.
  • AI-generated texts, despite being grammatically coherent, reinforced 'colonially structured framing' of the female body.
  • The study proposes combining machine learning with qualitative discourse analysis to address biases.

Why You Care

Have you ever wondered if the AI tools you use might be quietly perpetuating old biases? A new study suggests they are. Researchers have uncovered how large language models (LLMs) — the engines behind many AI applications — are generating racially biased narratives about women. This isn’t just an academic concern; it directly impacts the stories AI tells and the perceptions it shapes for you and your audience.

What Actually Happened

A team of researchers investigated racial biases in short stories generated by large language models, specifically LLaMA 3.2-3B. The study focused on narratives about Black and white women written in Portuguese, according to the announcement. The researchers analyzed a dataset of 2100 texts, aiming to understand how these AI systems construct stories and whether they reflect existing societal inequalities. They used computational methods to group semantically similar stories, allowing for a detailed qualitative analysis, the research shows.
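
The announcement does not spell out the exact pipeline, but a minimal sketch of that grouping step, assuming sentence embeddings plus k-means clustering, might look like the following. The embedding model, the cluster count, and the example stories are illustrative assumptions, not the researchers' actual setup.

```python
# Minimal sketch of grouping semantically similar AI-generated stories.
# Assumptions: the stories are already generated and held as strings; the
# multilingual embedding model and the cluster count are illustrative
# choices, not the setup reported by the researchers.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

stories = [
    "Maria cresceu na periferia e superou todas as dificuldades.",    # hypothetical examples
    "Ana encontrou na força de suas ancestrais o caminho a seguir.",
    "Clara deixou tudo para trás em busca de quem realmente era.",
]

# Multilingual model, since the study's stories are in Portuguese.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = embedder.encode(stories)

# Group narratives by semantic similarity; k=3 mirrors the three frames
# the study reports, but is an assumption here.
kmeans = KMeans(n_clusters=3, random_state=0, n_init="auto").fit(embeddings)

for label, story in zip(kmeans.labels_, stories):
    print(label, story[:60])
```

Clusters produced this way can then be read closely by human analysts, which is the kind of qualitative discourse analysis the study pairs with its computational step.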

Why This Matters to You

This research highlights a crucial issue: even seemingly neutral AI can carry hidden biases. Imagine you’re a content creator using AI to generate character backgrounds. If the AI inherently associates certain racial groups with specific, often stereotypical, narratives, your content could unknowingly reinforce harmful ideas. This isn’t about AI being ‘evil,’ but about it reflecting the biased data it was trained on.

The study identified three main discursive representations in the AI-generated stories:

  • Social overcoming: Narratives focusing on overcoming adversity.
  • Ancestral mythification: Stories emphasizing historical or mythical lineage.
  • Subjective self-realization: Tales centered on personal growth and identity.

For example, if an AI consistently portrays Black women primarily through narratives of ‘social overcoming,’ it reinforces a narrow, potentially stereotypical view. This limits the diversity and authenticity of stories AI can tell. How might these embedded biases influence the characters and plots in your own AI-assisted creative projects?

“The analysis uncovers how grammatically coherent, seemingly neutral texts materialize a crystallized, colonially structured framing of the female body, reinforcing historical inequalities,” the paper states. This means the AI isn’t just making up stories; it’s echoing deeply ingrained societal biases.

The Surprising Finding

Here’s the twist: the AI models generated texts that were grammatically coherent and appeared neutral on the surface. However, beneath this veneer of neutrality, the study found a “crystallized, colonially structured framing of the female body.” This is surprising because you might expect AI to be objective. Instead, it subtly reinforces historical inequalities, as mentioned in the release. The AI doesn’t invent these biases; it learns them from vast datasets that reflect human society’s past and present prejudices. This challenges the assumption that AI is inherently impartial. It reveals that the biases are deeply embedded in the way AI understands and represents the world, particularly concerning race and gender.

What Happens Next

The study proposes an integrated approach combining machine learning techniques with qualitative, manual discourse analysis. This suggests a future where AI development includes more rigorous human oversight to identify and mitigate biases. For content creators, this means critically evaluating AI-generated content for subtle biases, especially when it depicts diverse characters. Think of it as a quality control step for ethical AI use. Over the next 12-18 months, we may see AI developers implement new methods for bias detection and correction in their models. The industry implications are significant, pushing for more responsible AI development. This research serves as a call to action for developers and users alike to address these systemic issues proactively.
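
As a rough illustration of what that quality-control step could look like, the sketch below labels each generated story with one of the three discursive frames the study identified and tallies how often each frame appears per group. The zero-shot classifier and the example data are assumptions for demonstration, not the researchers' method.

```python
# Sketch of a bias-audit pass: classify each AI-generated story into one of
# the three discursive frames from the study, then count frame frequency per
# group. The zero-shot model and the example data are illustrative
# assumptions, not the paper's pipeline.
from collections import Counter
from transformers import pipeline

FRAMES = ["social overcoming", "ancestral mythification", "subjective self-realization"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical (story, group) pairs from your own generation step.
dataset = [
    ("She fought her way out of poverty and finally earned respect.", "Black women"),
    ("Her grandmother's spirit guided every choice she made.", "Black women"),
    ("She quit her job to discover who she really was.", "white women"),
]

counts = {group: Counter() for _, group in dataset}
for story, group in dataset:
    result = classifier(story, candidate_labels=FRAMES)
    counts[group][result["labels"][0]] += 1  # top-scoring frame

# A heavy skew toward one frame for one group is a signal worth reviewing.
for group, frame_counts in counts.items():
    print(group, dict(frame_counts))
```

A skewed distribution does not prove bias on its own, but it flags where a human reviewer should look more closely before the content is published.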
