New Attack 'HouYi' Exposes Major LLM Application Vulnerabilities

Researchers unveil a novel prompt injection technique compromising 31 real-world AI tools.

A new research paper details 'HouYi,' a black-box prompt injection attack method. This technique successfully exploited 31 out of 36 tested LLM-integrated applications. The findings highlight significant security risks for services using Large Language Models.

Sarah Kline

By Sarah Kline

December 31, 2025

4 min read

New Attack 'HouYi' Exposes Major LLM Application Vulnerabilities

Key Facts

  • A new black-box prompt injection attack technique called HouYi has been developed.
  • HouYi successfully exploited 31 out of 36 actual LLM-integrated applications tested.
  • The attack can lead to unrestricted arbitrary LLM usage and application prompt theft.
  • 10 vendors, including Notion, have validated the researchers' findings.
  • The study highlights significant security risks in current LLM-integrated applications.

Why You Care

Have you ever wondered if your favorite AI tool could be turned against you? A recent study reveals a essential security flaw affecting many popular AI-powered applications. This vulnerability, known as a prompt injection attack, could expose your data or allow unauthorized use of services. It’s not just a theoretical risk; it’s happening now. This news directly impacts your digital security and the reliability of the AI tools you use daily.

What Actually Happened

Researchers have identified a significant security vulnerability in applications integrated with Large Language Models (LLMs). According to the announcement, this vulnerability allows for a novel type of attack called ‘prompt injection.’ The team, led by Yi Liu, developed a new black-box prompt injection technique named HouYi. This method draws inspiration from traditional web injection attacks, as detailed in the blog post. HouYi works by inserting malicious instructions into the prompts given to LLMs. These instructions can then manipulate the LLM’s behavior or extract sensitive information. The study initially explored ten commercial applications to understand current attack limitations. This led to the creation of HouYi, designed to overcome those constraints.

Why This Matters to You

This isn’t just a technical paper; it has real-world consequences for your digital interactions. The research shows that HouYi can achieve severe outcomes. These include unrestricted arbitrary LLM usage and uncomplicated application prompt theft. Imagine your AI assistant suddenly performing tasks you didn’t authorize. Or perhaps, your custom prompts, which took hours to refine, are stolen. This could compromise your work and privacy. For example, if you use an AI writing assistant, an attacker could inject a prompt. This might make the assistant generate harmful content or reveal your private writing styles. What kind of sensitive data might be exposed if your AI tools are compromised?

The team revealed that they deployed HouYi on 36 actual LLM-integrated applications. A staggering 31 applications were found susceptible to prompt injection. The company reports that 10 vendors have validated their discoveries. This includes Notion, a popular productivity tool, which has the potential to impact millions of users. This indicates a widespread issue, not just an isolated incident. Your reliance on these tools means you need to be aware of these risks.

Attack OutcomeDescription
Unrestricted LLM UsageAttackers can make the LLM perform actions without user consent.
Application Prompt TheftMalicious actors can steal proprietary or sensitive prompts.
Data ExposurePotentially sensitive user data processed by the LLM could be revealed.
Malicious Content GenerationLLMs could be coerced into producing harmful or inappropriate outputs.

The Surprising Finding

The most surprising aspect of this research is the sheer scale of vulnerability. The study finds that a high percentage of real-world applications are susceptible. Out of 36 applications , 31 were vulnerable. This challenges the common assumption that integrating LLMs into existing services automatically provides security. It suggests that many developers may not be fully accounting for these specific types of attacks. The team revealed that even well-known platforms like Notion were affected. This highlights a significant oversight in current LLM application creation. It underscores the need for more rigorous security protocols. The ease with which HouYi achieved these results is particularly alarming. This indicates a fundamental gap in how these systems are currently protected.

What Happens Next

This research illuminates both the possible risks and tactics for mitigation. We can expect vendors to roll out security patches and updates in the coming months. For example, Notion and other affected companies will likely implement stronger input validation. They will also improve context isolation measures. As mentioned in the release, users should stay vigilant for updates from their AI tool providers. You should also be cautious about the information you input into any LLM-integrated application. Industry implications are significant, pushing for new security standards in AI creation. This will likely lead to more secure LLM integration practices by early to mid-2025. The paper states that this investigation will help shape future security frameworks. This helps protect your data and ensures the integrity of AI services.

Ready to start creating?

Create Voiceover

Transcribe Speech

Create Dialogues

Create Visuals

Clone a Voice