OpenAI has launched a restricted GPT-5.5 Bio Bug Bounty programme, offering selected security researchers up to $25,000 to bypass the safety guardrails of its latest AI model. The reward goes to a single "universal jailbreak" that can defeat all five questions in a biosafety challenge without ...