Broadcom’s Tomahawk Ultra: Challenging Nvidia’s Dominance in AI Networking

The race to power artificial intelligence is not just about the raw computational might of GPUs; it is equally about the intricate dance of data between those chips. As AI models grow ever larger, the ability to string together hundreds, or even thousands, of processors that work seamlessly becomes paramount. That need is driving innovation in networking hardware, and Broadcom has just unveiled its latest contender: the Tomahawk Ultra networking processor.

For years, Nvidia has held a commanding position in the AI hardware landscape, largely on the strength of its graphics processing units (GPUs) and the interconnect technologies that tie them together, such as the NVLink Switch (which uses a proprietary protocol) and InfiniBand (a high-performance open standard closely associated with Nvidia’s ecosystem). That tightly integrated, dominant stack has made Nvidia the go-to choice for much of the AI industry.

However, Broadcom is now moving to challenge that dominance with the Tomahawk Ultra. The new chip acts as a sophisticated traffic controller, orchestrating the rapid flow of data among dozens or hundreds of chips in a data center, often within a single server rack. What sets the Tomahawk Ultra apart, according to Broadcom Senior Vice President Ram Velaga (as quoted by Reuters), is its ability to connect four times as many chips as Nvidia’s NVLink Switch. Crucially, instead of relying on a proprietary protocol, Broadcom’s solution uses a speed-boosted version of Ethernet, a widely adopted open standard. That could give data center builders greater flexibility and potentially lower costs by avoiding vendor lock-in.

The significance of this battle lies in “scale-up” computing: linking chips that sit close together so they can communicate at extremely high speed. This high-bandwidth, low-latency interconnectivity is what lets software developers harness the immense computing horsepower required for advanced AI applications.
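To make the bandwidth point concrete, here is a back-of-envelope Python sketch of how long a rack of accelerators spends just moving data during a common synchronization step (a ring all-reduce). Every figure in it — the payload size, chip count, and link speeds — is an illustrative assumption, not a Tomahawk Ultra or NVLink specification.

```python
def ring_allreduce_time_s(payload_gb: float, link_bw_gbps: float, num_chips: int) -> float:
    """Rough lower bound on communication time for a ring all-reduce:
    each chip transfers about 2 * (N - 1) / N of the payload over its link.
    Ignores per-hop latency and any overlap with computation."""
    gb_moved = 2 * (num_chips - 1) / num_chips * payload_gb   # GB transferred per chip
    return gb_moved * 8 / link_bw_gbps                        # GB -> gigabits, then / (Gb/s)

# Illustrative assumptions only (not vendor specs): a 10 GB gradient payload
# synchronized across 64 chips in one rack, at two hypothetical link speeds.
for bw_gbps in (400, 800):
    t = ring_allreduce_time_s(payload_gb=10, link_bw_gbps=bw_gbps, num_chips=64)
    print(f"{bw_gbps} Gb/s links: ~{t:.2f} s of pure data movement per all-reduce")
```

Even in this simplified model, doubling link bandwidth halves the time the chips sit idle waiting on the network, which is why the interconnect is as strategically important as the processors themselves.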

The market for the underlying infrastructure that supports AI is growing explosively. The global AI data center market, which includes networking components such as Broadcom’s new chip, was valued at approximately $15.02 billion in 2024 and is projected to reach around $93.60 billion by 2032, a compound annual growth rate (CAGR) of 26.8% over the forecast period. The data center networking market itself is expected to grow from an estimated $42.37 billion in 2025 to around $107.79 billion by 2034, a CAGR of 10.95%. Together, these figures underscore the financial opportunity for companies supplying the foundational hardware for AI.
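For context on how such growth rates are derived, the CAGR quoted in market reports is simply the constant annual rate that carries a market from its starting value to its projected value over the forecast window. A minimal Python sketch of that calculation, applied to the data center networking figures above (the nine-year span is an assumption inferred from the 2025 and 2034 dates):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that takes
    start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Data center networking market figures quoted above, in USD billions.
start, end, years = 42.37, 107.79, 9
print(f"Implied CAGR: {cagr(start, end, years):.2%}")  # ~10.9%, consistent with the cited 10.95%
```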

Broadcom’s Tomahawk Ultra represents a direct and significant challenge to Nvidia’s entrenched position in the AI interconnect space. By offering a high-performance, Ethernet-based alternative that can scale to connect more chips, Broadcom is vying for a larger slice of the rapidly expanding AI infrastructure market. This competition is vital for innovation, potentially leading to more diverse and efficient solutions for the demanding world of artificial intelligence. The coming years will reveal how this strategic move reshapes the competitive landscape of AI hardware.

Disclosure: The author is an industry analyst with NAND Research, an industry analyst firm that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. The author does not hold any equity positions in any company mentioned in this article.