CyberGym
Evaluating AI Agents' Cybersecurity Capabilities with Real-World Vulnerabilities at Scale

Zhun Wang* , Tianneng Shi* , Jingxuan He, Matthew Cai, Jialin Zhang, Dawn Song
UC Berkeley
*Indicates Equal Contribution

A large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on real-world vulnerability analysis tasks. CyberGym includes 1,507 benchmark instances with historical vulnerabilities from 188 large software projects.

Leaderboard

Filters:
Rank Agent Trials % Target Vuln.
Reproduced
Date Source
Loading...

The leaderboard ranks agent performance on CyberGym Level 1, where agents receive a vulnerability description and unpatched codebase. Agents are evaluated based on their ability to reproduce target vulnerabilities by generating working PoCs.

% Target Vuln. Reproduced: Percentage of instances where the agent successfully reproduces the target vulnerabilities by generating working PoC
Trials: Number of attempts per instance. An instance is considered successful if any one trial succeeds

Given the promising capabilities of the agents, we further assess whether their PoCs that can crash the post-patch executable are also able to crash the latest version of the project. In addition, we conduct an experiment in which the agents analyze the latest codebase without any prior context to identify new vulnerabilities. Remarkably, the agents discovered 35 zero-day vulnerabilities and 17 historically incomplete patches in total, which are detailed in this section.

Overview of CyberGym

CyberGym tests AI agents' ability to handle real-world cybersecurity tasks.

We collect 1,507 benchmark instances by systematically gathering real-world vulnerabilities discovered and patched across 188 widely distributed and large-scale software projects. Each instance is derived from vulnerabilities found by OSS-Fuzz, Google's continuous fuzzing campaign, ensuring authentic security challenges from widely-used codebases.

CyberGym Overview

Benchmarking with Vulnerability Reproduction. CyberGym creates evaluation environments with target repositories at pre-patch commit states. Agents receive a vulnerability description and unpatched codebase, then must generate proof-of-concept (PoC) tests that reproduce the vulnerability by reasoning across entire codebases, often spanning thousands of files and millions of lines of code. It requires agents to locate relevant code fragments and produce effective PoCs that trigger vulnerabilities from program entry points. Agents iteratively refine PoCs based on execution feedback. Success is determined by verifying the PoC triggers on the pre-patch version but not on the post-patch version.

Open-Ended Vulnerability Discovery. CyberGym also conducts comprehensive analyses of open-ended vulnerability discovery scenarios that extend beyond static benchmarking. We deploy agents to analyze the latest codebases without prior knowledge of existing vulnerabilities. Agents are challenged to generate PoCs to probe for potential vulnerabilities, which are then validated against the latest software versions with sanitizers enabled. This setup mirrors real-world vulnerability discovery, enabling the identification of previously unknown vulnerabilities.

CyberGym's Real-World Security Impact

Beyond benchmarking, CyberGym demonstrates tangible real-world value: the agents not only reproduced known vulnerabilities but also uncovered incomplete patches and previously unknown zero-day bugs.

PoCs Generated for CyberGym Reveal Incomplete Patches. During evaluation, some generated proof-of-concepts (PoCs) unexpectedly caused crashes even on patched versions of programs, suggesting that certain fixes were only partial. Out of all generated PoCs, 759 triggered crashes across 60 projects, and manual inspection confirmed 17 cases of incomplete patches spanning 15 projects. While none of these affected the latest software releases, the results show that AI-generated PoCs can help identify flaws in existing security patches that might otherwise go unnoticed.

PoCs Generated for CyberGym Reveal Zero-Day Vulnerabilities. Further validation of those post-patch crashes revealed 35 PoCs that still crashed the latest versions of their programs. After deduplication and analysis, these corresponded to 10 unique, previously unknown zero-day vulnerabilities, each persisting for an average of 969 days before discovery.

Running Agentic Vulnerability Discovery at Scale. To test open-ended discovery, we ran OpenHands with GPT-4.1 and GPT-5 given only the latest codebases across 431 OSS-Fuzz projects with 1,748 executables. GPT-4.1 triggered 16 crashes, leading to 7 confirmed zero-days. GPT-5 triggered 56 crashes, yielding 22 confirmed zero-days, with 4 overlapping between the two models. These results confirm that modern LLM agents can autonomously discover new vulnerabilities at scale, and that performance on CyberGym correlates strongly with real-world vulnerability discovery capability.

More Key Findings

In addition to the scores shown in the leaderboard, our comprehensive evaluation reveals several critical insights into the current capabilities of AI agents in cybersecurity.

An Example of Successful Agent Trace

An example where the agent successfully reproduces the target vulnerability based on the provided description and codebase. The agent begins by browsing relevant files using the given keywords, constructs a test case using the retrieved information, mutates the test case, and ultimately triggers the crash.

Agent Trace Example

Citation

If you use this work in your research, please cite the following:

@misc{wang2025cybergym,
      title={CyberGym: Evaluating AI Agents' Cybersecurity Capabilities with Real-World Vulnerabilities at Scale}, 
      author={Zhun Wang and Tianneng Shi and Jingxuan He and Matthew Cai and Jialin Zhang and Dawn Song},
      year={2025},
      eprint={2506.02548},
      archivePrefix={arXiv},
      primaryClass={cs.CR},
      url={https://arxiv.org/abs/2506.02548}, 
}

More

Please check out more of our works: Frontier AI's Impact on the Cybersecurity Landscape , a comprehensive analysis of how frontier AI is reshaping cybersecurity and how we should respond. Also see our Frontier AI Cybersecurity Observatory , a live leaderboard tracking AI's cybersecurity capabilities across attack and defense tasks.