Google Engages Marvell for Custom AI Inference Chips, Expanding Beyond Broadcom

Google is negotiating with Marvell Technology to develop two new AI chips: a memory processing unit and an inference-focused TPU. The move would add Marvell to Google’s existing chip partners, Broadcom and MediaTek. No contract has been signed yet, but the discussions began shortly after Broadcom secured a TPU deal running through 2031. The talks underscore Google’s shift toward inference, now its main compute expense, in a custom ASIC market forecast to grow 45% in 2026 and reach $118 billion by 2033.

According to The Information, Google is working with Marvell Technology on two AI chips for model execution. One is a memory processing unit for Google’s existing TPUs, and the other is a TPU designed for inference—the phase in which AI models interact with users. Marvell’s role would mirror MediaTek’s with the Ironwood TPU. No formal contract has been confirmed.

These discussions follow Broadcom’s announcement of a long-term TPU and component agreement running through 2031, suggesting that Google is adding Marvell to its roster rather than replacing Broadcom. The current lineup features Broadcom for high-performance variants, MediaTek for cost-effective options, and TSMC for manufacturing; the goal is supplier diversity, not substitution.

Google launched Ironwood, its seventh-generation TPU and its first focused on inference, offering ten times the performance of the TPU v5p. It scales to 9,216 liquid-cooled chips, delivering 42.5 FP8 exaflops across a 10-megawatt superpod, with plans to produce millions of units this year. The Marvell chips would complement Ironwood, possibly targeting different workloads or price points as Google’s inference compute needs grow.
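As a rough sanity check on the pod figures above (assuming, as the source does not state explicitly, that the 42.5 exaflops is the aggregate across all 9,216 chips), the implied per-chip throughput works out to roughly 4.6 FP8 petaflops:

```python
# Back-of-the-envelope check of the Ironwood superpod figures.
# Assumption: 42.5 FP8 exaflops is the aggregate of 9,216 chips.
POD_CHIPS = 9216
POD_EXAFLOPS_FP8 = 42.5

per_chip_pflops = POD_EXAFLOPS_FP8 * 1e18 / POD_CHIPS / 1e15
print(f"~{per_chip_pflops:.1f} FP8 petaflops per chip")  # prints ~4.6
```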

The shift from training to inference as the main demand driver is reshaping the chip market. Training a large model is a lengthy, intensive, largely one-time job, while inference runs continuously and scales with usage. As AI products gain users, inference becomes the dominant cost, making specialized inference chips a cost and efficiency advantage over general-purpose GPUs.

Google and Marvell have history: reports in 2023 described a chip project codenamed “Granite Redux,” aimed at cutting costs by using Marvell. At the time, Google praised Broadcom as an excellent partner, a sign it was already engaging multiple long-term suppliers.

Since 2023, Google appears to have kept Broadcom close, as the agreement running through 2031 confirms, while adopting a multi-supplier approach spanning Broadcom, MediaTek, and potentially Marvell, echoing supply-chain strategies common in the automotive industry.

Marvell posted record data center revenue in its fiscal year ending February 2026, with its custom silicon business generating $1.5 billion annually across 18 cloud-provider design wins. The company works with Amazon, Microsoft, Meta, and Google. Nvidia invested $2 billion in Marvell, forming a partnership to connect Marvell’s chips with Nvidia’s technologies and giving Marvell a foothold in both the GPU and ASIC camps. In December 2025, Marvell acquired Celestial AI, bolstering its connectivity platform for AI and cloud customers. CEO Matt Murphy is targeting a 20% share of the custom AI chip market and annual revenue growth of 30% by fiscal 2027. Marvell’s stock has surged about 50% this year, including a 30% jump in April following the Nvidia deal and the Google talks.

Marvell’s negotiations do not seem to have diminished Broadcom’s position. Broadcom holds more than 70% of the custom AI accelerators market, with AI revenue reaching $8.4 billion in the last quarter, forecasting $10.7 billion for the next. The company targets $100 billion in AI chip revenue by 2027. Broadcom’s shares rose over 6% after announcing a Google deal extension. Analysts project $21 billion in AI revenue from Google and Anthropic relationships in 2026, climbing to $42 billion in 2027, with Anthropic leveraging TPU-based computing from 2027.

The ASIC market is outpacing the GPU market, with TrendForce forecasting a 45% increase in custom chip sales in 2026, in contrast to 16% GPU shipment growth. Counterpoint Research estimates Broadcom’s market share in custom AI accelerators at about 60% by 2027, and Marvell’s at around 25%. The market is expected to reach $118 billion by 2033.

Google’s chip strategy now spans Broadcom, MediaTek, potentially Marvell, TSMC for manufacturing, and an in-house design team, with products covering training, inference, and general cloud computing. That complexity mitigates the risk of relying on a single supplier. The Marvell discussions emphasize inference, where volume makes cost advantages compound: at the scale of Google’s AI services, even marginal per-chip savings add up, aligning with the cost-saving rationale of the earlier “Granite Redux” effort.
