Anthropic has unveiled Claude Code Security, a new capability built into Claude Code and now available in a limited research preview. The tool targets security teams and open-source maintainers, offering AI-driven analysis to help work through the growing backlog of software vulnerabilities. Unlike standard static analysis tools that rely on fixed rules, Claude Code Security interprets code contextually, allowing it to identify complex vulnerabilities such as business logic flaws and access control issues that conventional methods often miss. The feature is accessible to Enterprise and Team customers, with prioritized access for open-source maintainers, and findings require human review before any changes are made.
Introducing Claude Code Security, now in limited research preview.
It scans codebases for vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix issues that traditional tools often miss.
Learn more: https://t.co/n4SZ9EIklG
— Claude (@claudeai) February 20, 2026
Claude Code Security leverages the latest Claude Opus 4.6 model, which has already demonstrated an ability to detect hundreds of previously unidentified vulnerabilities in open-source projects. The system runs a multi-layered verification process, assigning severity and confidence scores to findings, and offers suggested patches for developer approval. This approach aims to reduce false positives and focus attention on issues that carry the most risk.
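The triage flow described above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's implementation: the `Finding` record, field names, and the 0.7 confidence cutoff are all assumptions chosen for the example, showing how severity and confidence scores could filter and rank findings before a developer reviews suggested patches.

```python
from dataclasses import dataclass

# Hypothetical finding record; the fields are illustrative, not Anthropic's schema.
@dataclass
class Finding:
    title: str
    severity: str      # "critical" | "high" | "medium" | "low"
    confidence: float  # scanner's confidence in the finding, 0.0-1.0

# Lower rank = riskier; used to sort the review queue.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings: list[Finding], min_confidence: float = 0.7) -> list[Finding]:
    """Drop low-confidence findings, then surface the riskiest first."""
    kept = [f for f in findings if f.confidence >= min_confidence]
    return sorted(kept, key=lambda f: (SEVERITY_RANK[f.severity], -f.confidence))

findings = [
    Finding("SQL injection in search endpoint", "critical", 0.95),
    Finding("Missing access check on admin route", "high", 0.88),
    Finding("Possible timing side channel", "low", 0.40),  # below cutoff, filtered out
]
for f in triage(findings):
    print(f"[{f.severity.upper()}] {f.title} (confidence {f.confidence:.2f})")
```

Gating on confidence before ranking is one simple way to realize the stated goal of cutting false positives while keeping the highest-risk issues at the top of the review queue.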
Anthropic, the company behind Claude, has actively developed AI-based security capabilities through internal testing, participation in Capture-the-Flag competitions, and collaborations with research institutions. The company is positioning Claude Code Security as a solution to help defenders stay ahead of attackers who may use similar AI technologies. Early users and industry experts are watching closely, noting its potential to shift how organizations approach code security and vulnerability management.