Nvidia’s recent networking revenue milestone is a sign of how the company’s dominance of the AI infrastructure market is starting to extend well beyond AI chips into other product categories where it did not participate 10 years ago.
Nvidia’s push to provide the essential components and systems for AI infrastructure received a major validation point last month when the company revealed that its annual networking revenue surpassed Cisco’s.
While Nvidia CEO Jensen Huang did not mention Cisco by name on the company’s latest earnings call, he repeated a claim from Nvidia’s fourth-quarter earnings presentation that underscored the milestone: “Nvidia is the world’s largest networking business.”
This was based on networking revenue within Nvidia’s data center business reaching $31 billion for its 2026 fiscal year, which ended in late January, representing a 142 percent increase from the previous year. It was also “up more than 10 times” from when Nvidia acquired the foundation for the business, Mellanox Technologies, back in 2020.
By contrast, Cisco made $28 billion in networking revenue for its 2025 fiscal year, which ended last July. And for their most recently ended quarters, Nvidia made nearly $3 billion more in networking revenue than Cisco did.
It’s a sign of how Nvidia’s dominance of the AI infrastructure market—where it expects hyperscalers to spend nearly $700 billion this year—is starting to extend well beyond AI chips into other product categories where the company did not participate 10 years ago. This, in turn, is putting Nvidia into more direct competition with a growing number of companies, including those it counts as partners, like Cisco.
What Drove Nvidia’s Networking Business
What drove Nvidia’s networking revenue in the fourth quarter was a “continued ramp” of its NVLink compute fabric for its Grace Blackwell GB200 and GB300 rack-scale platforms as well as growth of its Spectrum-X Ethernet and Quantum InfiniBand networking platforms.
“The invention of NVLink really turbocharged our networking business. Every rack comes with nine nodes of switches, and each one of them has two chips in it, and in the future, they’ll have more. And so the amount of switching that we do per rack is really quite incredible,” Huang said on the Feb. 25 fourth-quarter earnings call.
“We’re also now the largest networking company in the world and if you look at Ethernet, we came into the Ethernet market about a couple of years ago, into Ethernet switching. And I think that we’re probably the largest Ethernet networking company in the world today—and surely will be soon. And so Spectrum-X Ethernet has been a home run for us,” he added.
Nvidia’s Vertical Integration Push Driven By ‘Extreme Co-Design’
The company has expanded into areas like networking and CPUs because of its view that it needs to develop these technologies in tandem with each other to deliver data center-scale computers that provide the fastest and most efficient performance for AI workloads.
Nvidia calls this level of vertical integration “extreme co-design.”
“Every single generation, we are committed to deliver many X factors of performance per watt and performance per dollar, and that pace and our ability to do extreme co-design allows us to deliver that value and that benefit to the customers, and that is the single most vital thing as it relates to our value delivered,” Huang said.
Nvidia Ramps Up Competition Against Intel And AMD
Even with this “extreme co-design” push, Nvidia is beginning to see real interest in its CPUs as a stand-alone offering, signaling a greater threat to Intel and AMD.
While the company has largely focused on integrating CPUs alongside GPUs into bespoke server tray designs for rack-scale platforms like the GB300, Nvidia recently announced deals with neocloud provider CoreWeave and hyperscaler Meta to supply a stand-alone offering of its upcoming Vera CPU for their data centers.
In explaining Nvidia’s evolving view of CPUs in the data center, Huang said that the company is seeing the need for a stand-alone offering because AI applications are now learning to use tools, many of which run in CPU-only compute environments. At the same time, other tools can run in environments powered by CPUs and GPUs, he added.
“And Vera was designed to be an excellent CPU for post-training. And some of the use cases in the entire pipeline of artificial intelligence includes using a lot of CPUs,” he said.
Nvidia Finds A New Way To Boost Accelerated Computing
While this signals increased competition for Intel and AMD, Nvidia is also finding new ways to expand its accelerator chip capabilities—where it is up against traditional rivals, startups and hyperscalers—through its recent non-exclusive licensing deal with AI chip designer Groq, reportedly worth $20 billion.
As part of the deal, Nvidia hired members of Groq’s team, including co-founder and CEO Jonathan Ross, to implement Groq’s inference technology into Nvidia offerings.
“As we did with Mellanox, we will extend Nvidia’s architecture with Groq innovations to enable new levels of AI infrastructure, performance and value,” Huang said.
Huang Points To ‘Exponential’ Growth In Compute Demand
Nvidia is facing a greater threat from competitors than it has over the past several years. Just look at AMD’s deal announced on Tuesday to supply 6 gigawatts of AI infrastructure powered by its Instinct GPUs to Meta. Or the continuing success of homegrown AI chip efforts at Amazon Web Services and Google, the latter of which is reportedly considering supplying TPUs to data centers owned by customers.
Nevertheless, Huang and his lieutenant, CFO Colette Kress, gave an air of confidence and perhaps inevitability about the company’s expectation for continued growth and dominance, which it is fueling with a large research and development budget along with a massive war chest it has been using to invest in companies of all sizes.
“Our pace of innovation, particularly at our scale, is unmatched, fueled by an annual R&D budget approaching $20 billion and our ability to extreme co-design across compute and networking, across chips, systems, algorithms and software,” Kress said on the call.
When a financial analyst asked if Nvidia is confident that hyperscalers, including cloud service providers, will continue growing their capital expenditures on AI infrastructure beyond this year, Huang said he is “confident in their cash flow growing.”
“And the reason for that is very simple,” he added. “We have now seen the inflection of agentic AI and the usefulness of agents across the world and enterprises everywhere. You’re seeing incredible compute demand because of it. In this new world of AI, compute is revenues. Without compute, there’s no way to generate tokens. Without tokens, there’s no way to grow revenues. So in this new world of AI, compute equals revenues.”
This has led Huang to believe that the industry has reached an “inflection point” because of what he called “exponential” growth in the number of tokens being generated by AI models that corresponds with demand for compute.
With Nvidia set to reveal expanded offerings at its GTC 2026 event Monday, the company will illuminate exactly how it will continue to feed this new stage of demand.
“The number of tokens that are being generated has really gone exponential, and so we need to inference at a much higher speed,” Huang said in February.