OpenAI appears to be preparing a new coding model called GPT-5.1-Codex-MAX, with early references pointing to a version designed for larger software projects and long-running development tasks. The description found in the codebase says it will be both smarter and faster, positioning it as a model that can handle project-scale workloads rather than only short tasks or isolated files. This direction suggests that OpenAI is targeting one of the biggest limitations of current coding assistants across the industry: they struggle with large repositories, where the model must repeatedly rescan or reconstruct its understanding of code that does not fit into a single context window.

The hints about long-running project work imply some internal mechanism for retaining or reconstructing repository-level knowledge without repeatedly ingesting the full code tree. This is where most agent systems run into trouble: the moment a codebase exceeds the context limit, the agent has to maintain its own structured memory or index of the repository. Anthropic’s Claude MAX, with its larger 500k-token context window at enterprise-only scale, partly addresses this, but even a window that size still has an upper bound. The wording of OpenAI’s upcoming announcement does not confirm a larger context window, although the comparison to Claude MAX raises the possibility that OpenAI will explore similar scaling. A move to faster compute plus a different architecture or retrieval mechanism for navigating large repositories seems more likely.
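
To make the indexing idea concrete, here is a minimal sketch of the general technique: build a lightweight index over the repository once, then retrieve only the files most relevant to the current task so they fit in the context window. Everything in it is an illustrative assumption, not anything confirmed about OpenAI's design: the `RepoIndex` class, the `retrieve` method, and the crude token-overlap scoring (a stand-in for the embedding-based or AST-aware retrieval a production system would use).

```python
# A sketch of repository indexing + retrieval for a coding agent.
# Hypothetical names throughout; stdlib only so it runs as-is.
import os
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased identifier/word counts, a crude stand-in for embeddings."""
    return Counter(re.findall(r"[A-Za-z_]\w+", text.lower()))

class RepoIndex:
    """Index source files once, then retrieve a context-window-sized subset."""

    def __init__(self, root: str, extensions=(".py", ".ts", ".go", ".rs")):
        self.docs = {}  # path -> token counts
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(extensions):
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, encoding="utf-8", errors="ignore") as f:
                            self.docs[path] = tokenize(f.read())
                    except OSError:
                        continue  # skip unreadable files

    def retrieve(self, task: str, top_k: int = 5) -> list[str]:
        """Rank files by token overlap with the task description."""
        query = tokenize(task)
        scored = [
            (sum(min(cnt, doc[tok]) for tok, cnt in query.items()), path)
            for path, doc in self.docs.items()
        ]
        scored.sort(reverse=True)
        return [path for score, path in scored[:top_k] if score > 0]

if __name__ == "__main__":
    # Feed only the retrieved files into the model's context window.
    index = RepoIndex(".")
    for path in index.retrieve("fix the token refresh logic in the auth client"):
        print(path)
```

The design choice this illustrates is the trade-off the article describes: instead of scaling the context window itself, the agent pays a one-time indexing cost and then selects a small, task-relevant slice of the repository on every step of a long-running job.
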
The leaked feature suggests OpenAI is trying to close a gap that none of the major systems fully covers today. A model capable of stable, multi-file, multi-step, long-horizon coding work would reset expectations for AI-assisted software engineering, especially as competitive pressure from the Gemini 3 launch mounts. And since the reference appeared in the codebase only recently, the timing suggests OpenAI could roll the model out soon, potentially within days.