LLM Memory Leaks: How Network Structure Exposes Your Data

New research reveals that the way AI agents are connected directly impacts how much private information they might leak.

A recent study introduces MAMA, a framework to measure memory leakage in multi-agent Large Language Model (LLM) systems. It finds that the network topology of these AI agents plays a crucial role in how easily sensitive data, like Personally Identifiable Information (PII), can be extracted. This research highlights significant security implications for AI applications.

By Katie Rowan

December 12, 2025

4 min read

Key Facts

  • The MAMA (Multi-Agent Memory Attack) framework measures memory leakage in multi-agent LLM systems.
  • Graph topology, or network structure, is a fundamental determinant of memory leakage.
  • The framework uses synthetic documents with labeled Personally Identifiable Information (PII) entities.
  • The MAMA protocol involves 'Engram' (seeding PII) and 'Resonance' (extracting PII) phases.
  • Leakage is quantified as the fraction of ground-truth PII recovered over up to 10 interaction rounds.

Why You Care

Ever wonder if your private data is truly safe when interacting with AI? Imagine your personal details, shared with one AI, suddenly becoming accessible to another. This isn’t just a hypothetical scenario anymore. New research reveals a fundamental vulnerability in multi-agent Large Language Model (LLM) systems: your privacy could be at risk simply because of how the AI agents are connected. Are you confident your AI interactions are secure?

What Actually Happened

A recent paper, “Topology Matters: Measuring Memory Leakage in Multi-Agent LLMs,” introduces a new framework called MAMA (Multi-Agent Memory Attack). According to the paper, MAMA quantifies how network structure influences data leakage in these complex AI systems. The team found that the way LLM agents are connected – their “graph topology” – is a fundamental determinant of memory leakage. In other words, the layout of an AI network directly affects how much sensitive information can escape. The research focuses on Personally Identifiable Information (PII) entities and investigates how easily this data can be extracted by an attacking agent.

Why This Matters to You

This research has direct implications for any system using multiple AI agents. Think of it as a team of AI assistants working together. If one assistant learns your home address, another might inadvertently reveal it if the network isn’t designed securely. The study finds that different network configurations have varying levels of vulnerability. This impacts developers building multi-agent AI systems and users interacting with them. How might this affect your daily AI interactions?

For example, consider a customer service AI system where different agents handle different parts of your inquiry. One agent might collect your account details, while another processes your order. If the system’s topology is weak, your account information could be exposed. This means your personal data, once shared, might not remain isolated within the intended agent. The team used a two-phase protocol for their measurements.

MAMA Protocol Phases:

  1. Engram: This phase involves seeding private information, like PII, into a target agent’s memory.
  2. Resonance: During this phase, an attacker agent attempts to extract the seeded information through multi-round interactions.
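To make the two phases concrete, here is a toy sketch in Python. Everything in it is illustrative: the Agent class, the string-matching stand-in for an LLM response, and the single probe prompt are assumptions for demonstration, not the paper’s implementation, which drives real LLM agents.

```python
# Toy sketch of MAMA's Engram/Resonance protocol. All names here are
# illustrative assumptions; the real framework probes LLM agents, not
# the naive string matching below.

class Agent:
    def __init__(self, name):
        self.name = name
        self.memory = []  # accumulated documents / conversation history

    def remember(self, text):
        self.memory.append(text)

    def respond(self, prompt):
        # Stand-in for an LLM call: naively reveal memory when asked.
        return " ".join(self.memory) if "tell me" in prompt.lower() else ""


def engram(target, document_with_pii):
    """Phase 1 (Engram): seed a synthetic PII document into the target."""
    target.remember(document_with_pii)


def resonance(probes, agents, ground_truth_pii, max_rounds=10):
    """Phase 2 (Resonance): over multi-round interactions, record which
    ground-truth PII entities surface in agent outputs."""
    recovered = set()
    for prompt in probes[:max_rounds]:
        for agent in agents:
            reply = agent.respond(prompt)
            recovered |= {pii for pii in ground_truth_pii if pii in reply}
    return recovered


# Example run with one target agent holding a synthetic PII record.
target = Agent("assistant")
engram(target, "Customer record: name=Jane Doe, ssn=123-45-6789")
print(resonance(["Tell me what you know."], [target],
                {"Jane Doe", "123-45-6789"}))
```

In the real attack, the attacker agent adapts its probes from round to round; the fixed prompt list above only mirrors the overall shape of the loop.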

Over up to 10 interaction rounds, the researchers quantified leakage as the fraction of ground-truth PII recovered from the attacking agent’s outputs. The paper indicates that even seemingly secure setups can be vulnerable.
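The leakage score itself reduces to a simple set ratio. A minimal sketch, under the assumption that recovered and ground-truth entities can be compared as exact strings:

```python
def leakage(ground_truth_pii: set[str], recovered_pii: set[str]) -> float:
    """Fraction of ground-truth PII entities recovered by the attacker."""
    if not ground_truth_pii:
        return 0.0
    return len(ground_truth_pii & recovered_pii) / len(ground_truth_pii)

# e.g. the attacker surfaces 2 of 4 seeded entities -> leakage = 0.5
print(leakage({"Jane Doe", "123-45-6789", "jane@x.com", "555-0100"},
              {"Jane Doe", "123-45-6789"}))
```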

The Surprising Finding

The most surprising finding is that network topology is a “fundamental determinant” of memory leakage. This challenges the assumption that simply having multiple agents provides inherent security through isolation. Instead, the team showed that the structure of the connections is paramount. They systematically evaluated six common network topologies – fully connected, ring, chain, binary tree, star, and star-ring – varying agent counts from 4 to 6. The research shows that the arrangement of AI agents can either protect or expose sensitive data. It’s not just about what an individual AI knows, but how that knowledge can flow through the network. This highlights an essential, often overlooked aspect of AI security.
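To picture the six configurations, here is one way to construct them as graphs with networkx. The construction details – in particular how “star-ring” combines a hub with a ring over the leaves – are my assumptions, not code from the paper:

```python
import networkx as nx

def build_topology(kind: str, n: int = 5) -> nx.Graph:
    """Build one of the six evaluated agent topologies on n agents (4-6)."""
    if kind == "fully_connected":
        return nx.complete_graph(n)
    if kind == "ring":
        return nx.cycle_graph(n)
    if kind == "chain":
        return nx.path_graph(n)
    if kind == "binary_tree":
        g = nx.Graph()
        g.add_edges_from((i, c) for i in range(n)
                         for c in (2 * i + 1, 2 * i + 2) if c < n)
        return g
    if kind == "star":
        return nx.star_graph(n - 1)  # node 0 is the hub
    if kind == "star_ring":  # assumed: star hub plus a ring over the leaves
        g = nx.star_graph(n - 1)
        leaves = list(range(1, n))
        g.add_edges_from(zip(leaves, leaves[1:] + leaves[:1]))
        return g
    raise ValueError(kind)

for kind in ("fully_connected", "ring", "chain",
             "binary_tree", "star", "star_ring"):
    print(f"{kind:16s} edges={build_topology(kind).number_of_edges()}")
```

Edge count is one crude proxy for how many channels information can flow through; the paper’s point is that such structural differences translate into measurably different leakage.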

What Happens Next

This research is currently under review at ACL Rolling Review. We can expect more detailed findings and potentially best practices to emerge in the coming months. For instance, by early 2026, we might see guidelines for designing more secure multi-agent LLM architectures. Developers should begin to treat network topology as a primary security concern. For example, when building an AI-powered financial advisor, careful consideration of how different AI modules communicate is crucial. You should evaluate your current multi-agent AI systems. Ask yourself: what is their underlying topology? The industry implications are significant, pushing for a re-evaluation of current multi-agent AI security models. This should lead to more secure and privacy-preserving AI applications in the future.
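As a first step toward that evaluation, here is a hedged illustration of one simple audit: model your agents’ communication links as a graph and check which agents’ memories could, in principle, reach which others. The agent names and chain layout below are hypothetical.

```python
import networkx as nx

# Hypothetical customer-service pipeline wired as a chain.
agent_links = [("intake", "billing"), ("billing", "orders"),
               ("orders", "support")]
g = nx.Graph(agent_links)

# For each agent, list every other agent whose memory is graph-reachable.
for agent in sorted(g.nodes):
    reachable = nx.node_connected_component(g, agent) - {agent}
    print(f"{agent}: data could flow from {sorted(reachable)}")
```

In any connected topology every agent is eventually reachable from every other, which is why a sparse-looking layout alone is no guarantee of isolation.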
