06.03.2026

AI Security Auditing: How Claude Found 22 CVEs in Firefox


Mozilla and Anthropic recently announced a security collaboration that should get every SRE's attention. Claude Opus 4.6 found more vulnerabilities in Firefox in February 2026 than were reported in any single month throughout 2025. The implications for security operations are significant.

What Happened

Anthropic's Frontier Red Team used Claude to analyze Firefox's codebase, starting with the JavaScript engine. Within twenty minutes of exploration, Claude identified its first use-after-free vulnerability. By the end of the two-week engagement, the model had scanned nearly 6,000 C++ files and reported 112 unique issues.

Mozilla validated 22 CVEs from these findings. The fixes shipped in Firefox 148.0, protecting hundreds of millions of users.

Why This Matters for SREs

Firefox has been one of the most heavily audited open-source codebases for over two decades. It undergoes continuous fuzzing, static analysis, and manual security review. Despite this, Claude found bugs that existing tools missed.

Key findings for operations teams:

  • Speed: Claude surfaced its first finding within minutes, far faster than traditional review cycles
  • Scale: The model processed thousands of files systematically
  • Coverage: It found logic errors that fuzzers typically miss
  • Patches: Claude also proposed candidate fixes, validated by humans

This is not theoretical. Mozilla engineers validated each finding and landed fixes within hours.

The Attack vs Defense Gap

Claude is currently far better at finding vulnerabilities than exploiting them. Anthropic tested exploitation capabilities and found the model could only turn bugs into working exploits in rare cases. This gives defenders an advantage, but that gap may not last.

Practical Takeaways

For teams considering AI-assisted security auditing:

  1. Task verifiers matter: Claude works best when it can check its own work with automated tests
  2. Minimal test cases help triage: Bug reports with reproducible examples get fixed faster
  3. Candidate patches accelerate fixes: AI-generated patches need human review but speed up resolution
  4. Scope carefully: Start with isolated components like specific engines or services

Mozilla has already started integrating AI-assisted analysis into its internal security workflows.

Getting Started

Anthropic recently released Claude Code Security in limited preview, bringing vulnerability discovery directly to developers. For teams managing complex codebases, this represents a new defensive tool worth evaluating.

The window where AI helps defenders more than attackers may not stay open forever. Now is the time to adopt these tools.

Conclusion

AI-powered security auditing is no longer experimental. The Mozilla-Anthropic collaboration proves it works at scale on production codebases. SRE teams should consider integrating AI analysis into their security pipelines.

To learn how Akmatori helps teams automate security and operations workflows with AI agents, visit akmatori.com. For enterprise-grade cloud infrastructure, explore Gcore.

Automate incident response and prevent on-call burnout with AI-driven agents!