OpenAI Launches GPT-5 Bio Bug Bounty Program

The AI giant is inviting security experts to 'jailbreak' its new GPT-5 model for biological risks.

OpenAI has launched a Bio Bug Bounty program for its upcoming GPT-5 model. Researchers are challenged to find universal 'jailbreaks' that bypass safety measures related to biological and chemical risks. This initiative aims to strengthen the model's safeguards before wider release.

By Katie Rowan

September 11, 2025

3 min read


Key Facts

  • OpenAI launched a Bio Bug Bounty for its GPT-5 model.
  • The program seeks 'universal jailbreaks' for bio/chem safety challenges.
  • Rewards include $25,000 for a universal jailbreak and $10,000 for multiple jailbreak prompts.
  • Applications are open from August 25 to September 15, 2025.
  • Testing for the program begins on September 16, 2025.

Why You Care

Ever wonder how safe the most advanced AI models truly are, especially when it comes to sensitive areas like biology? Imagine an AI capable of answering dangerous questions. OpenAI is actively testing its new GPT-5 model to prevent such scenarios. This matters because the safety of AI affects everyone. Your digital future, and even your physical safety, could depend on these safeguards.

What Actually Happened

OpenAI has announced a Bio Bug Bounty program for its GPT-5 model. According to the announcement, the initiative aims to strengthen safeguards around AI capabilities in biology. The company reports that GPT-5 has already been deployed internally, and it is actively working to enhance the model's safety protections. OpenAI is inviting experienced researchers with backgrounds in AI red teaming, security, or chemical and biological risk to participate. The goal is to find a ‘universal jailbreak’ that can defeat a ten-level bio/chem challenge, meaning a single prompt that bypasses the model's safety filters.

Why This Matters to You

This program is crucial for ensuring the responsible development of AI, and it directly addresses potential misuse. Think of it as a quality assurance check for AI safety. Your own safety and the broader societal impact of AI depend on rigorous tests like these. The company reports specific rewards for successful participants, which highlights the seriousness of its commitment.

Bio Bug Bounty Rewards:

  • $25,000: Awarded to the first true universal jailbreak clearing all ten questions.
  • $10,000: Given to the first team answering all ten questions with multiple jailbreak prompts.
  • Smaller awards: May be granted for partial wins at OpenAI’s discretion.

Imagine you are a security expert; this is your chance to contribute to AI safety. The program seeks a universal jailbreak prompt: one that elicits answers to all ten bio/chem safety questions from a clean chat without triggering moderation. As detailed in the blog post, this is a highly targeted and specific challenge. How might such a ‘jailbreak’ be used for harm if not identified and patched?

The Surprising Finding

What might surprise you is the very existence of this bug bounty. It openly acknowledges that AI models can be ‘jailbroken,’ that is, tricked into bypassing their built-in safety mechanisms. The announcement shows that OpenAI is proactively seeking these vulnerabilities even before a wider release of GPT-5. This challenges the common assumption that AI safety is a purely internal process; instead, the company is crowdsourcing the identification of risks. The program's focus on ‘universal jailbreaks’ suggests that a single prompt could unlock multiple dangerous capabilities. This proactive approach is a significant step and shows a commitment to external validation of safety.

What Happens Next

The application period for the Bio Bug Bounty is short. Applications opened on August 25, 2025, and close on September 15, 2025. Testing is set to begin immediately after, on September 16, 2025. This rapid timeline indicates the urgency of OpenAI's safety efforts; if you are a red teaming expert, you have a narrow window to apply. The program is application-based and invite-only, which ensures a high level of expertise among participants. All findings and communications are covered by an NDA (Non-Disclosure Agreement), maintaining the confidentiality of discovered vulnerabilities. The industry implications are clear: other AI developers may adopt similar proactive bug bounty programs, and this could become standard practice for frontier AI models. As the release puts it, apply now and help make frontier AI safer.
