LLMs Mimic Human Social Networks with Surprising Accuracy

New research reveals how large language models form connections like people do.

A recent study shows that large language models (LLMs) can reproduce human-like social network dynamics. The finding has implications for AI system design and social simulation, and it raises questions about bias and fairness in AI.


By Sarah Kline

August 30, 2025

4 min read


Key Facts

  • Large language models (LLMs) can reproduce human-like social network dynamics.
  • LLMs mimic micro-level principles like preferential attachment, triadic closure, and homophily.
  • LLMs also reproduce macro-level properties such as community structure and small-world effects.
  • The emphasis of these principles adapts to context, mirroring human social patterns (e.g., homophily in friendship, heterophily in organizations).
  • A human-subject survey confirmed strong alignment between LLM and human link-formation decisions.

Why You Care

Ever wonder whether artificial intelligence could truly understand human relationships? Could an AI form a ‘friendship’ or even a professional network? New research reveals that large language models (LLMs) mimic how you and I connect with surprising fidelity. This discovery could change how we design AI systems and simulate social interactions. What if AI could predict social trends accurately?

What Actually Happened

A new paper titled “Network Formation and Dynamics Among Multi-LLMs” explores how multiple LLM agents interact. The research, conducted by Marios Papachristou and Yuan Yuan, investigates whether LLM interactions approximate human-like network dynamics. The authors developed a framework to study these behaviors and benchmarked the LLM agents against human decisions. They found that LLMs consistently reproduce fundamental micro-level principles, including preferential attachment (where popular nodes attract more connections), triadic closure (friends of friends become friends), and homophily (liking those similar to you). What’s more, the models also replicate macro-level properties such as community structure and small-world effects (everyone is connected through a few steps). The study spans various settings, including friendship, telecommunication, and employment networks.
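The paper’s exact prompts aren’t reproduced here, but the general setup can be sketched as a round-based simulation in which each agent is asked, in natural language, whom it would connect to. In this Python sketch, `query_llm` is a hypothetical placeholder for whatever chat-completion client is available, and the prompt wording and profile format are illustrative assumptions, not the authors’ design.

```python
import random

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion API call.
    Swap in a real client (a hosted API, a local model, etc.)."""
    raise NotImplementedError

def propose_link(agent_id, profiles, network):
    """Ask one LLM agent which non-neighbor it would connect to."""
    candidates = [n for n in profiles
                  if n != agent_id and n not in network[agent_id]]
    if not candidates:  # agent is already connected to everyone
        return None
    candidate_profiles = {c: profiles[c] for c in candidates}
    prompt = (
        f"You are person {agent_id} with profile {profiles[agent_id]}. "
        f"Your current connections: {sorted(network[agent_id])}. "
        f"Candidates and their profiles: {candidate_profiles}. "
        "Reply with only the ID of the one person you would connect with."
    )
    reply = query_llm(prompt).strip()
    # Fall back to a random candidate if the model's reply is unusable.
    if reply.isdigit() and int(reply) in candidates:
        return int(reply)
    return random.choice(candidates)

def simulate_round(profiles, network):
    """One round: every agent proposes one new undirected tie."""
    for agent_id in profiles:
        partner = propose_link(agent_id, profiles, network)
        if partner is None:
            continue
        network[agent_id].add(partner)
        network[partner].add(agent_id)
```

The resulting `network` dict can then be analyzed with standard graph tooling to check for the micro- and macro-level signatures described above.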

Why This Matters to You

This research has significant implications for how we use and interact with AI. Imagine a future where AI can help design more cohesive teams or predict social trends. The study finds that LLMs adapt their connection emphasis to context: they favor homophily in friendship networks but prefer heterophily (connecting with those different from them) in organizational settings, mirroring patterns of social mobility in human behavior. A controlled human-subject survey confirmed strong alignment, with LLMs and human participants making similar link-formation decisions. This means LLMs can serve as tools for social simulation and synthetic data generation. How might your daily interactions with AI change if it truly understood social nuances?
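One standard way to quantify this homophily/heterophily distinction in any network, LLM-grown or otherwise, is attribute assortativity: positive values indicate homophily, negative values heterophily. Here is a minimal networkx sketch with made-up node groups (the paper’s actual measurements are not reproduced here):

```python
import networkx as nx

# Toy "friendship" network: ties mostly within the same group.
G = nx.Graph()
G.add_nodes_from([0, 1, 2, 3], group="engineer")
G.add_nodes_from([4, 5, 6, 7], group="designer")
G.add_edges_from([(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7), (0, 4)])
print(nx.attribute_assortativity_coefficient(G, "group"))  # positive: homophily

# Toy "organizational" network: ties mostly across groups.
H = nx.Graph()
H.add_nodes_from(G.nodes(data=True))
H.add_edges_from([(0, 4), (1, 5), (2, 6), (3, 7), (0, 5)])
print(nx.attribute_assortativity_coefficient(H, "group"))  # negative: heterophily
```

In the first toy graph, ties stay within groups and the coefficient is positive; in the second, ties cross groups and it turns negative.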

Key Principles LLMs Reproduce (see the sketch after this list):

  • Preferential Attachment: Nodes with more connections attract even more connections.
  • Triadic Closure: If two people share a friend, they are more likely to become friends themselves.
  • Homophily: The tendency for individuals to associate and bond with similar others.
  • Community Structure: The organization of networks into distinct groups or clusters.
  • Small-World Effects: The idea that any two people in a social network are connected by a short chain of acquaintances.
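These principles have standard computational signatures. The following networkx sketch is not the paper’s methodology, just an illustration of how each macro-level property shows up in classic generative models:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Preferential attachment: Barabasi-Albert growth, where well-connected
# nodes are more likely to receive new links, producing a few hubs.
ba = nx.barabasi_albert_graph(n=500, m=2, seed=42)
top = sorted((d for _, d in ba.degree()), reverse=True)[:5]
print("hub degrees:", top)

# Small-world effects: high clustering (triadic closure) combined with
# short average path lengths between any two nodes.
ws = nx.connected_watts_strogatz_graph(n=500, k=6, p=0.1, seed=42)
print("clustering:", round(nx.average_clustering(ws), 3))
print("avg path length:", round(nx.average_shortest_path_length(ws), 2))

# Community structure: modularity maximization recovers clusters.
print("communities:", len(greedy_modularity_communities(ba)))
```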

The Surprising Finding

Here’s the twist: the relative emphasis of these social principles adapts to context. The paper states that LLMs favor homophily in friendship networks, yet exhibit heterophily in organizational settings, directly mirroring human social mobility patterns. This challenges the assumption that AI would apply social rules rigidly; instead, the models demonstrate a nuanced understanding of social context. That adaptability is essential for realistic social simulations, and it raises important questions about how these models learn such subtle distinctions. The study identifies this adaptability as a core strength of LLMs in social contexts.

What Happens Next

This research paves the way for exciting future applications. We might see AI-powered social simulators within the next 12-18 months: tools that could help urban planners design more connected communities or assist businesses in fostering better team dynamics. The authors show that LLMs can serve as tools for social simulation and synthetic data generation. However, this also raises important questions about bias, fairness, and the design of AI systems that will participate in human networks. Your role in shaping these discussions will be crucial; we need to ensure these systems are developed responsibly and reflect diverse human values. The next steps will involve addressing these complex ethical considerations.
