From launching its new Thor Ultra chip today to its recent blockbuster partnership with OpenAI, here are five ways Broadcom is escalating its battle against competitors Nvidia and AMD for AI hardware supremacy.
Broadcom is mounting a full-court press against AI hardware competitors Nvidia and AMD, forging a blockbuster new partnership with OpenAI and launching what it calls the industry's first 800G AI Ethernet networking chip: Thor Ultra.
With a $110 billion backlog and strong demand for its AI networking and XPU AI accelerator technologies, Broadcom CEO Hock Tan is bullish about his company’s future.
“If you do your own chips, you control your destiny,” said Broadcom’s CEO during a podcast this week announcing Broadcom’s new partnership with OpenAI. “You continue to need compute capacity — the best, latest compute capacity — as you progress in a road map towards a better and better frontier model and towards superintelligence.”
On Tuesday, Broadcom unveiled its new networking chip, Thor Ultra, which will compete head-to-head against networking chips from the likes of Nvidia and AMD.
[Related: How VMware-Broadcom Clients Can Move To VCF In ‘Gradual Steps’ Vs. ‘Brute Force’]
Broadcom's new blockbuster partnership with AI unicorn OpenAI also pits the company against AI stars Nvidia and AMD, as Broadcom and OpenAI will build, deploy and deliver accelerator and network systems purpose-built for AI clusters.
The San Jose, Calif.-based tech giant, which also owns VMware, currently has a market cap of $1.65 trillion, making it one of the top ten most valuable companies on the planet.
Here are the five biggest things channel partners, IT leaders and customers need to know about Broadcom’s battle against Nvidia and AMD as it launches new chip innovation and forms a blockbuster partnership with OpenAI.
Broadcom Unveils New Networking Chip: Thor Ultra
On Tuesday, Broadcom unveiled a new networking chip, Thor Ultra, that lets customers build AI computing systems by linking together thousands of chips, putting Broadcom in direct competition with Nvidia and AMD networking silicon.
Thor Ultra is the industry’s first 800G AI Ethernet Network Interface Card (NIC), capable of interconnecting hundreds of thousands of XPUs to drive trillion-parameter AI workloads, according to Broadcom.
“Thor Ultra delivers on the vision of Ultra Ethernet Consortium (UEC) for modernizing RDMA [remote direct memory access] for large AI clusters,” said Ram Velaga, senior vice president and general manager of the Core Switching Group at Broadcom, in a statement. “Designed from the ground up, Thor Ultra is the industry’s first 800G Ethernet NIC and is fully feature compliant with UEC specification.”
Thor Ultra enables computing infrastructure operators to deploy more chips than they otherwise could, Broadcom said, allowing clients to build and run the large models used to power AI apps such as ChatGPT.
Thor Ultra introduces a suite of UEC-compliant, advanced RDMA innovations: packet-level multipathing for efficient load balancing; selective retransmission for efficient data transfer; and out-of-order packet delivery directly to XPU memory to maximize fabric utilization.
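Conceptually, those three RDMA features work together: packets are sprayed across multiple network paths, the receiver places each arriving packet directly into memory by sequence number regardless of arrival order, and only dropped packets are resent. The toy simulation below sketches that interplay under stated assumptions; the function and variable names are hypothetical and do not represent Broadcom's implementation or any real API.

```python
# Conceptual sketch only: a toy model of UEC-style RDMA behavior
# (packet-level multipathing, out-of-order delivery to memory,
# selective retransmission). All names here are hypothetical.
import random

def send_message(chunks, num_paths=4, loss_rate=0.1, seed=42):
    rng = random.Random(seed)
    receive_buffer = [None] * len(chunks)  # stands in for XPU memory
    path_load = [0] * num_paths            # packets carried per path
    in_flight = list(enumerate(chunks))    # (sequence number, payload)

    while in_flight:
        # Packets traveling different paths arrive out of order;
        # shuffling models that reordering.
        rng.shuffle(in_flight)
        lost = []
        for seq, chunk in in_flight:
            path_load[seq % num_paths] += 1   # packet-level multipathing
            if rng.random() < loss_rate:
                lost.append((seq, chunk))     # only this packet is resent
            else:
                receive_buffer[seq] = chunk   # placed by seq, order-independent
        in_flight = lost                      # selective retransmission

    return receive_buffer, path_load

data = [f"chunk-{i}" for i in range(8)]
received, load = send_message(data)
assert received == data  # message reassembles correctly despite reordering and loss
```

The key design point the sketch illustrates: because each packet lands in its slot by sequence number, the receiver never stalls waiting for in-order delivery, and a loss triggers resending only the missing packets rather than the whole window.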
The Takeaway: Broadcom's new Thor Ultra will no doubt go head-to-head against Nvidia's and AMD's network interface chips. The goal is to further extend Broadcom's control of network communications inside data centers designed for AI applications.
Broadcom’s Blockbuster OpenAI Partnership: 10 Gigawatts Of AI Accelerators
In a blockbuster move, OpenAI and Broadcom formed a partnership around co-developing systems including 10 gigawatts of custom AI accelerators.
OpenAI will design the accelerators and systems; Broadcom, which brings a larger enterprise customer base and manufacturing capabilities, will then co-develop and deploy the AI accelerators.
The racks of systems will be scaled entirely with Broadcom's Ethernet and other connectivity solutions.
These AI accelerators will be deployed across OpenAI facilities and partner data centers.
“Broadcom’s collaboration with OpenAI signifies a pivotal moment in the pursuit of artificial general intelligence,” said Broadcom CEO Tan in a statement. “OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI.”
For Broadcom, this collaboration reinforces its importance in custom accelerators and the choice of Ethernet as the technology for scale-up and scale-out networking in AI data centers.
The Takeaway: OpenAI will design the chips rather than buying them from Nvidia and AMD, as it traditionally has. The partnership is a strategy designed to help Broadcom win market share from Nvidia and AMD, while giving OpenAI more autonomy from both.
Broadcom’s XPUs Business
Broadcom has been one of the biggest beneficiaries of the generative AI boom, as hyperscalers and other large companies have been snapping up its custom AI chips, which the company calls XPUs.
There have been three customers in particular that are spending billions on buying Broadcom XPUs.
Broadcom doesn't name these large customers, but market analysts have said its first three clients were Google, Meta and ByteDance. Those three companies run arguably the largest social media platforms in the world: ByteDance's TikTok, Google's YouTube, and Meta's Facebook and Instagram.
Broadcom has won a handful of massive orders for systems based on its custom-designed XPU accelerators, which are ASICs it co-designs with clients.
“AI networking demand continues to be strong because networking is becoming critical as LLMs continue to evolve in intelligence and compute clusters have to grow bigger,” Tan (pictured) said during Broadcom’s recent quarterly earnings report in September. “The network is the computer, and our customers are facing challenges as they scale to clusters beyond 100,000 compute nodes.”
In addition to co-designing the XPUs, Broadcom helps integrate the systems with memory and networking via its Jericho and Tomahawk lines of high-performance Ethernet switches.
The Takeaway: Broadcom is inking massive multibillion-dollar deals with some of the largest buyers of AI hardware. Years ago, these large customers would have likely made similar deals with Nvidia or AMD. Additionally, Broadcom inked a new $10 billion deal with another large customer.
OpenAI Is Not The Mystery $10 Billion Customer
In September, Broadcom announced it had hooked a new, unnamed $10 billion customer deal for its custom XPU data center AI chips.
"One of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs and, in fact, have secured over $10 billion of orders of AI racks based on our XPUs," said Broadcom CEO Hock Tan during the company's quarterly financial report last month.
Many speculated that OpenAI was the new $10 billion customer, since Broadcom does not disclose its large web-scale and cloud customers.
However, Broadcom’s Charlie Kawwas said this week that OpenAI is not the $10 billion mystery customer that it disclosed during its financial report.
"I would love to take a $10 billion PO (purchase order) from my good friend [OpenAI President] Greg [Brockman]," said Kawwas, president of Broadcom's semiconductor solutions group, in an interview with CNBC. "He has not given me that PO yet."
Because of this new, fourth massive customer, Broadcom increased its AI revenue forecast for 2026.
The Takeaway: Even with OpenAI not being the mystery customer, Broadcom's future in AI hardware is bright: its fight against Nvidia and AMD is now backed by billions of dollars in additional AI revenue.
Broadcom-OpenAI Can Take On Nvidia And AMD Prices
One of the big reasons Broadcom teamed with OpenAI is to make AI infrastructure much more affordable compared to its competitors, the leaders of both companies said.
OpenAI’s President Brockman (pictured) said his company used its own models to accelerate chip design and improve efficiency on Broadcom hardware.
“We’ve been able to get massive area reductions,” Brockman said in a podcast announcing the Broadcom partnership. “You take components that humans have already optimized and just pour compute into it, and the model comes out with its own optimizations.”
Broadcom CEO Tan said on the podcast that OpenAI is building the most innovative frontier models.
“If you do your own chips, you control your destiny,” Tan said. “You continue to need compute capacity—the best, latest compute capacity—as you progress in a road map towards a better and better frontier model and towards superintelligence.”
Broadcom plans to begin deploying its racks of AI accelerators and network systems with OpenAI in the second half of 2026, with deployments completed by the end of 2029.
The Takeaway: Broadcom's OpenAI partnership is designed to create integrated AI solutions that drive efficiency and cost reductions versus competitors. Delivering AI infrastructure and systems at lower cost than Nvidia and AMD, with the first deployments less than a year away, could be huge.