Meta partners with Broadcom for custom AI chip development

Meta expands its partnership with Broadcom to co-develop custom MTIA chips for AI workloads, boosting its internal infrastructure.


Meta has moved deeper into custom AI hardware, announcing an expanded partnership with Broadcom to co-develop multiple generations of its Meta Training and Inference Accelerator (MTIA) chips. The plan covers chip design, advanced packaging, and networking, with Meta framing the deal as part of the compute base it needs to run AI products and real-time experiences across Facebook, Instagram, WhatsApp, Messenger, and its wider services. Meta said the first phase of deployment will exceed 1 gigawatt of custom silicon capacity and then grow into a multi-gigawatt rollout over time.

The hardware itself is aimed at Meta’s internal infrastructure rather than the public market. MTIA is built for inference and recommendation workloads at scale, and Meta has said it already deploys hundreds of thousands of MTIA chips in its data centers for organic content and ads. In March, the company said it was accelerating its roadmap to four new chip generations within two years, with MTIA 300 already in production for ranking and recommendation training, and MTIA 400, 450, and 500 positioned to take on broader workloads, especially generative AI inference, into 2027. Meta’s argument is that purpose-built silicon can deliver higher efficiency and lower total cost than relying only on general-purpose AI chips.

Broadcom’s role goes well beyond acting as a manufacturing partner. Meta said the partnership is built on Broadcom’s XPU platform, while Broadcom’s Ethernet and cluster networking technologies are expected to connect Meta’s expanding AI compute fleets. Broadcom has recently highlighted its 3.5D XDSiP modular XPU platform, Tomahawk 6 switching, Jericho 4 fabrics for very large XPU clusters, and 800G AI NIC technology, all of which point toward the same goal: building larger AI systems with higher bandwidth and lower power overhead. The announcement matters because it shows Meta trying to control not just model development but the full stack underneath it, from CPUs and accelerators to the network fabric tying clusters together.

This also places Broadcom inside a much wider Meta infrastructure push. Over the past two months, Meta has separately announced AI infrastructure partnerships with NVIDIA, AMD, and Arm, while keeping MTIA at the center of its own silicon strategy. The result is a portfolio model: external partners for parts of the stack, and Meta-designed chips for the workloads it believes it can optimize most aggressively. Broadcom’s CEO Hock Tan will also leave Meta’s board and shift into an advisor role focused on Meta’s silicon roadmap, underlining how strategic the deal has become for both companies.
