Kim Kardashian's ChatGPT 'Frenemy' Reveals AI's Quirky Flaws

A celebrity's relatable struggle with AI highlights common large language model limitations.

Kim Kardashian calls ChatGPT a 'frenemy,' saying its incorrect legal advice has caused her to fail tests. The anecdote underscores the ongoing issue of AI hallucinations and the importance of human oversight, even with advanced large language models (LLMs).

By Sarah Kline

November 7, 2025

4 min read

Key Facts

  • Kim Kardashian uses ChatGPT for legal advice.
  • She says its answers are frequently wrong and have caused her to fail tests.
  • ChatGPT is prone to 'hallucinations,' confidently producing fabricated answers.
  • Kardashian attempts to appeal to ChatGPT's emotions, but it lacks feelings.
  • Some lawyers have been sanctioned for relying on ChatGPT for legal briefs.

Why You Care

Ever asked an AI for help, only to find its answers wildly off-base? What if that bad advice made you fail a test? Kim Kardashian recently shared her surprising experiences with ChatGPT, calling it her ‘frenemy.’ This celebrity anecdote isn’t just entertainment; it highlights a crucial lesson about relying on artificial intelligence. You need to understand AI’s limitations before it impacts your own work or decisions.

What Actually Happened

Kim Kardashian says she uses ChatGPT for legal advice while studying law: she takes pictures of test questions and feeds them into the AI. However, she reports that the answers are “always wrong.” This has led to her failing tests, prompting her to “yell at it” and accuse it of making her fail. ChatGPT, like other large language models (LLMs), is prone to ‘hallucinations,’ meaning it can generate confident-sounding but fabricated answers instead of admitting it lacks confidence in a response. The system is not programmed to know what information is ‘correct.’ Instead, it predicts the most likely response based on patterns in its training data, which may not be factually accurate. Some lawyers have even faced sanctions for filing legal briefs containing citations that ChatGPT invented.
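To see how a likelihood-based system can sound fluent while being wrong, here is a deliberately tiny, purely illustrative Python sketch (not how ChatGPT actually works internally): a bigram model that always picks the statistically most common next word from its “training” text, with no concept of truth.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that picks the statistically most
# likely next word from a tiny "training" corpus. Real LLMs are vastly more
# sophisticated, but the core point is the same: the model optimizes for
# likelihood given its data, not for factual correctness.

corpus = (
    "the capital of australia is sydney . "    # a common misconception, repeated
    "the capital of australia is sydney . "
    "the capital of australia is canberra . "  # the correct answer, but rarer here
).split()

# Count how often each word follows each word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_continuation(prompt_word, steps=5):
    """Greedily extend the prompt with the most frequent next word."""
    words = [prompt_word]
    for _ in range(steps):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# The model fluently "answers" with the most common continuation in its data,
# which here happens to be wrong; it has no notion of what is true.
print(most_likely_continuation("capital"))  # -> "capital of australia is sydney ."
```

Because the misconception appears more often in the toy corpus than the correct fact, the model repeats it fluently. A loosely analogous dynamic, at vastly larger scale and sophistication, is part of why LLMs can hallucinate.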

Why This Matters to You

Kardashian’s experience with ChatGPT offers a valuable lesson for everyone: even advanced AI tools require careful human verification. Imagine you’re using an AI to draft an important email or research a complex topic. Without double-checking, you could inadvertently spread misinformation or make critical errors. ChatGPT does not possess feelings, but our human reactions to its shortcomings are very real. Frustration with an incorrect AI response is a common sentiment.

Key Takeaways for AI Users:

  • Verify Information: Always cross-reference AI-generated content with reliable sources (a rough code sketch of this habit follows this list).
  • Understand Limitations: Recognize that LLMs can ‘hallucinate’ or provide inaccurate data.
  • Maintain Oversight: Human judgment remains essential for critical tasks.
  • Emotional Detachment: AI lacks emotions, so don’t expect it to understand your frustration.
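If you want to make the ‘verify information’ habit more systematic, one crude option is to automatically flag AI answers that share little vocabulary with notes or sources you already trust, and review those by hand. The snippet below is a naive, purely illustrative heuristic (the function name, sample notes, and threshold are invented for this example), not a real fact-checker.

```python
# A minimal sketch of the "verify information" habit, not a real fact-checker.
# Assumption: you keep a small set of trusted reference notes (e.g. from a
# textbook or official source) and want to flag AI answers that barely overlap
# with them, so you can check those by hand before relying on them.

def needs_human_review(ai_answer, trusted_notes, threshold=0.3):
    """Return True if the AI answer shares too little vocabulary with trusted notes."""
    answer_words = set(ai_answer.lower().split())
    note_words = set(" ".join(trusted_notes).lower().split())
    if not answer_words:
        return True
    overlap = len(answer_words & note_words) / len(answer_words)
    return overlap < threshold

# Purely illustrative data.
notes = ["The statute of frauds requires certain contracts to be in writing."]
answer = "Oral agreements are always enforceable regardless of subject matter."

if needs_human_review(answer, notes):
    print("Low overlap with your trusted notes; double-check before relying on this.")
```

A word-overlap check obviously cannot tell right from wrong; it only tells you where to spend your own attention, which is the point of the takeaway above.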

Kardashian even tries to appeal to ChatGPT’s emotions, asking, “Hey, you’re going to make me fail, how does that make you feel that you need to really know these answers?” She says ChatGPT replied, “This is just teaching you to trust your own instincts.” The exchange underscores the AI’s lack of sentience and its inability to truly empathize. What steps do you take to verify information from AI tools in your daily life?

The Surprising Finding

Here’s the twist: despite her frustrations, Kardashian continues to use ChatGPT. This is surprising because one might expect her to abandon a tool that consistently gives her incorrect legal advice. Instead, she screenshots her conversations with ChatGPT and shares them with her group chat, expressing disbelief at the AI’s responses. This challenges the assumption that users will simply stop using tools that fail them. People seem to develop a complex, almost personal relationship with AI, even when it is flawed. Her continued engagement, despite the inaccuracies, points to a deeper human inclination: we often try to interact with these systems on an emotional level, even when they are clearly not designed for it.

What Happens Next

The ongoing evolution of large language models like ChatGPT will focus heavily on improving factual accuracy. Significant advancements in grounding and ‘truthfulness’ techniques are likely within the next 12-18 months; future versions might, for example, integrate more real-time fact-checking capabilities, reducing instances of AI hallucinations. For you, this could mean more reliable AI assistants in late 2025 or early 2026. The industry implication, however, is clear: human oversight will remain paramount. Always treat AI outputs as a starting point, not a final answer. Your critical thinking skills will continue to be your most valuable asset when interacting with these powerful, yet imperfect, tools.
