Mistral AI has joined Nvidia’s new Nemotron Coalition as a founding member, placing the French lab at the center of an initiative to develop open frontier foundation models with shared compute, data, evaluations, and post-training work. Nvidia has announced that the coalition’s first project is a base model co-developed with Mistral AI on DGX Cloud, which will serve as the foundation for the upcoming Nemotron 4 family. The coalition also includes Black Forest Labs, Cursor, LangChain, Perplexity, Reflection AI, Sarvam, and Thinking Machines Lab, signaling Nvidia’s intention to turn Nemotron into a shared ecosystem rather than a single-vendor model line.
The timing is significant as Mistral has paired this partnership with the release of Mistral Small 4, a new Apache 2.0 model that integrates reasoning, coding, and multimodal input into one system. Mistral states that the model employs a 128-expert MoE architecture with 4 experts active per token, 119 billion total parameters, a 256k context window, and a new "reasoning_effort" control that allows users to choose between faster replies and deeper reasoning. The company is targeting developers, enterprise teams, and researchers who desire a single model for chat, code, document analysis, and visual tasks, eliminating the need to switch between different model families.
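To make the "reasoning_effort" control concrete, here is a minimal sketch of how such a setting might be passed in a request payload to an OpenAI-compatible chat completions endpoint. This is an illustration only: the parameter name comes from the announcement, but its exact placement, the accepted values (`"low"`/`"high"` here), and the model id `mistral-small-4` are assumptions, not confirmed API details.

```python
import json

def build_request(prompt: str, effort: str = "low") -> dict:
    """Build a hypothetical chat-completions payload with a
    reasoning-effort setting (all field names are assumptions)."""
    return {
        "model": "mistral-small-4",   # hypothetical model id
        "reasoning_effort": effort,   # trade latency for deeper reasoning
        "messages": [{"role": "user", "content": prompt}],
    }

# Faster reply vs. deeper reasoning is a one-field change:
payload = build_request("Summarize this contract.", effort="high")
print(json.dumps(payload, indent=2))
```

The appeal of a single knob like this is operational: one deployed model can serve both quick chat traffic and heavier analytical queries without routing between separate model families.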
🚀Announcing a strategic partnership with NVIDIA to co-develop frontier open-source AI models, combining Mistral AI’s frontier model architecture and full-stack AI offering with NVIDIA’s leading compute infrastructure and development tools.
— Mistral AI (@MistralAI) March 16, 2026
From day one, availability is extensive. Mistral Small 4 is live in Mistral API and AI Studio, published on Hugging Face, supported across vLLM, llama.cpp, SGLang, and Transformers, and offered through Nvidia’s stack as both a prototype option on build.nvidia.com and a production-ready NIM deployment. Mistral also mentions that the model can be customized with Nvidia NeMo, making this announcement more than just a branding collaboration. It is a distribution and infrastructure deal aimed at rapidly integrating open models into real production environments.
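For teams that want to self-host rather than use the hosted API, the vLLM path mentioned above would look roughly like the following. This is a deployment sketch under assumptions: the Hugging Face repo id `mistralai/Mistral-Small-4` is hypothetical (check the actual model card), and the 262144-token limit simply mirrors the advertised 256k context window.

```shell
# Install vLLM, then launch its OpenAI-compatible server.
pip install vllm

# Repo id is an assumption; substitute the real one from Hugging Face.
# --max-model-len caps context at 256k tokens (262144 = 256 * 1024).
vllm serve mistralai/Mistral-Small-4 --max-model-len 262144
```

The NIM route trades this manual setup for a prebuilt container, which is the "production-ready" half of Nvidia’s offering.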
For Mistral, this partnership continues an existing relationship with Nvidia. The two previously collaborated on Mistral NeMo, and Mistral has consistently tied its open-model narrative to large-scale infrastructure access as it grows. That access matters as the company works to establish itself as Europe’s leading independent AI builder. In September 2025, Mistral raised €1.7 billion in a Series C funding round at an €11.7 billion post-money valuation, with Nvidia among the investors. The message is clear: Mistral wants open models to compete at the frontier, while Nvidia wants its cloud, chips, NeMo tooling, and NIM deployment stack to be the backbone of that effort.