Anthropic launches AI code review tool for Claude Teams & Enterprise

What's new? Anthropic has launched Code Review, a pull request evaluation tool that uses AI agents to find bugs and rank them by severity, now in beta for Team and Enterprise plans.


Anthropic has launched Code Review, a new automated pull request evaluation tool now available in research preview for Team and Enterprise users. This feature is designed for developers and engineering teams seeking deeper analysis on code changes, particularly those handling large volumes of code or complex projects. Code Review dispatches multiple AI-powered agents on each PR to investigate potential bugs. These agents work together to verify issues, filter out false positives, and prioritize findings by severity, delivering a summary comment and in-line notes for actionable insights. Unlike previous tools such as the Claude Code GitHub Action, this solution focuses on comprehensive review rather than speed, and it is billed based on usage, typically costing $15–25 per review depending on PR complexity.

The system adapts to the size and complexity of each PR, deploying more agents for substantial changes and fewer for minor updates. In internal use, Anthropic has observed a marked rise in substantive review coverage, with 54% of PRs receiving detailed feedback compared to 16% previously. Code Review does not approve changes (human reviewers retain final decision authority), but it surfaces more critical issues, as seen both in Anthropic’s own workflow and among early-access customers. Feedback from engineers indicates strong agreement with the tool’s findings, with an error rate below 1%. The feature is currently in beta and can be enabled from settings in Claude Code for eligible customers; setup requires installing a GitHub App and selecting repositories.
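The flow described above (scale the number of agents with PR size, cross-check findings between agents, drop unverified ones, and rank the rest by severity) can be sketched conceptually. Everything in this sketch, including the function names, thresholds, and severity scale, is a hypothetical illustration of that kind of pipeline, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical severity scale used only for ordering findings.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    message: str
    severity: str
    confirmations: int  # how many agents independently flagged this issue

def agent_count_for(changed_lines: int) -> int:
    """Scale the number of review agents with PR size (illustrative thresholds)."""
    if changed_lines < 50:
        return 2
    if changed_lines < 500:
        return 4
    return 8

def triage(findings: list[Finding], min_confirmations: int = 2) -> list[Finding]:
    """Keep only findings confirmed by multiple agents, ordered by severity."""
    verified = [f for f in findings if f.confirmations >= min_confirmations]
    return sorted(verified, key=lambda f: (SEVERITY_RANK[f.severity], f.file, f.line))

# Example: three raw findings; the single-agent one is filtered as a likely false positive.
raw = [
    Finding("app.py", 42, "possible null dereference", "high", 3),
    Finding("db.py", 7, "unparameterized SQL query", "critical", 2),
    Finding("ui.py", 99, "stylistic nit", "low", 1),
]
report = triage(raw)  # [critical finding in db.py, high finding in app.py]
```

The filter-then-rank shape mirrors the article's description: verification across agents suppresses false positives, and severity ordering puts the most critical issues at the top of the summary comment.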

Anthropic, the company behind this launch, is known for its focus on developing reliable large-language-model-based tools for software development. By introducing Code Review, Anthropic aims to address the increasing demands on developer teams and the need for more thorough quality checks, especially as code output accelerates. This move positions Anthropic to compete more directly with other advanced code review automation tools, offering a more detailed and scalable solution for enterprise-grade projects.
