With Intel’s venture capital arm backing SambaNova Systems’ new $350 million funding round, the AI chip startup plans to tap into Intel’s ‘global enterprise, cloud and partner channels’ to drive sales of joint offerings for ‘cloud-scale AI inference.’
Intel plans to tap into its “enterprise, cloud and partner channels” for a new “multiyear strategic collaboration” it has entered with AI chip startup SambaNova Systems after acquisition talks between the two companies recently ended.
SambaNova announced the partnership Tuesday alongside a $350 million Series E funding round, which it said received “strong participation” from Intel Capital, and the unveiling of its next-generation SN50 AI chip, which it said can outperform rival products. The funding round was led by private equity firms Vista Equity Partners and Cambium Capital.
[Related: Exclusive: Intel Taps Ex-Arm, HPE Exec For Data Center Systems Post Amid AI Reorg]
Calling the SN50 the “most efficient chip for agentic AI,” the San Jose, Calif.-based startup said that the chip, set to ship later this year, is up to five times faster than competitive chips and can run agentic AI workloads at one-third the cost of GPUs.
“AI is no longer a contest to build the biggest model,” Rodrigo Liang, co-founder and CEO of SambaNova, said in a statement. “With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”
Intel: SambaNova Tie-Up Is Complementary To Its AI Strategy
SambaNova made the funding and partnership announcements after Bloomberg reported last month that discussions for Intel to acquire the startup stalled. The publication first reported on acquisition talks between the two companies last October.
The startup’s spokesperson said the acquisition deal is “not in discussion at this stage.” An Intel spokesperson declined to comment on the matter.
The Intel representative said that the company’s strategic partnership with SambaNova is meant to complement its AI infrastructure strategy, which spans from Xeon CPUs to GPUs. Last year, the semiconductor giant committed to a data center GPU road map with an annual release cadence before hiring a new GPU chief architect in January.
“Customers are asking for more choice and more efficient ways to scale AI,” Kevork Kechichian, the head of Intel’s Data Center Group, said in a statement. “By combining Intel’s leadership in compute, networking and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”
Intel CEO Lip-Bu Tan has served as chairman of SambaNova’s board since it was founded in 2017, according to his LinkedIn profile. His investment firms, Celesta Capital and Walden International, have been longtime investors. While SambaNova’s spokesperson declined to say if the firms participated in the Series E funding round or currently hold stakes in the startup, the representative called them “long-standing investors.”
SambaNova Expected To Tap Into Intel’s Sales Channels
SambaNova said the funding round, along with the new Intel partnership to deliver “cloud-scale AI inference,” will help with the SN50’s production ramp and distribution. The startup called cloud-scale AI inference a multibillion-dollar market opportunity.
The multiyear collaboration between SambaNova and Intel will focus on the delivery of “high-performance, cost-efficient AI inference solutions for AI-native companies, model providers, enterprises and government organizations around the world.”
This will involve the expansion of SambaNova’s vertically integrated AI cloud platform using Intel’s Xeon CPUs, which it said will be “supported by reference architectures, deployment blueprints and partnerships with systems integrators and software vendors.”
The startup said the combination of its systems with Intel’s CPUs, accelerators and networking technologies will power “scalable, production-ready inference for reasoning, code generation, multimodal applications and agentic workflows.”
The two companies plan to engage in co-selling and co-marketing activities for these offerings, with Intel expected to tap into its “global enterprise, cloud and partner channels to accelerate adoption across the AI ecosystem.”
SN50: SoftBank Named As First Customer; Initial Chip Details
SambaNova said the first SN50 customer is Japanese investment giant SoftBank Group, which plans to integrate the chip into next-generation AI data centers in Japan.
In addition to owning large stakes in Intel and rival Arm, SoftBank acquired Ampere Computing, which designs Arm-compatible CPUs, last year and AI chip startup Graphcore the year before as part of a new AI infrastructure push.
SambaNova said that the SN50 uses its Reconfigurable Data Unit (RDU) architecture to enable “ultra-low latency” for “real-time responsiveness” in applications like voice assistants and to “power thousands of simultaneous AI sessions with consistent high performance.”
The startup has previously said that one strength of the RDU architecture is its ability to combine multiple operations into a single kernel call, eliminating additional overhead associated with launching multiple kernel calls and accelerating performance as a result.
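The launch-overhead argument behind kernel fusion can be illustrated with a generic sketch. This is not SambaNova’s software stack, just a hypothetical example: each separate kernel launch pays a fixed dispatch cost, while a fused kernel computes the same result in one pass and pays that cost once.

```python
import numpy as np

LAUNCH_OVERHEAD_US = 5.0  # hypothetical fixed dispatch cost per kernel launch


def unfused(x: np.ndarray) -> np.ndarray:
    # Three separate "kernel" calls: scale, bias, activation.
    # On real accelerators, each launch would pay its own dispatch overhead.
    y = x * 2.0                 # kernel 1
    y = y + 1.0                 # kernel 2
    return np.maximum(y, 0.0)   # kernel 3 (ReLU)


def fused(x: np.ndarray) -> np.ndarray:
    # One "kernel" computing the same math in a single pass,
    # incurring the launch overhead only once.
    return np.maximum(x * 2.0 + 1.0, 0.0)


x = np.array([-1.0, 0.5, 3.0])
assert np.allclose(unfused(x), fused(x))  # identical results, fewer launches
print(f"overhead: {3 * LAUNCH_OVERHEAD_US} us unfused vs {LAUNCH_OVERHEAD_US} us fused")
```

The numbers here are illustrative only; the point is that fusing N element-wise operations collapses N launch overheads into one while producing bit-identical math.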
It also said that the SN50 enables “higher hardware utilization” to lower the cost of generating tokens and improve return on investment for AI inference.
In addition, SambaNova said that the SN50 uses three tiers of memory—SRAM, HBM and DDR—to offer “breakthrough model capacity,” enabling it to run models with more than 10 trillion parameters and context lengths of more than 10 million tokens. This three-tier memory architecture is optimized by the chip’s “resident multi-model memory and agentic caching” to cut infrastructure costs for enterprise-scale AI deployments, according to the startup.
While the startup didn’t provide more details about the SN50’s three-tier memory architecture, it has explained how each memory tier is used for its previous-generation SN40L chip: DDR provides capacity for hosting hundreds of models and the ability to quickly switch them out on a single socket, HBM “holds the currently running model and caches others,” and distributed SRAM “enables high operational intensity through spatial kernel fusion and bank-level parallelism.”
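The SN40L-style division of labor described above—DDR as a capacity tier for many models, HBM as a cache for the currently running ones—can be sketched with a hypothetical model store. The class name, slot count and eviction policy here are illustrative assumptions, not details SambaNova has disclosed; distributed SRAM, which serves the running kernels, is not modeled.

```python
class TieredModelStore:
    """Hypothetical sketch of a two-level model hierarchy: a large DDR
    capacity tier holding many models, and a small HBM tier caching the
    active ones, with simple oldest-first eviction (an assumption)."""

    def __init__(self, hbm_slots: int = 2):
        self.ddr: dict[str, bytes] = {}  # capacity tier: hosts many models
        self.hbm: dict[str, bytes] = {}  # fast tier: currently running models
        self.hbm_slots = hbm_slots

    def load_to_ddr(self, name: str, weights: bytes) -> None:
        self.ddr[name] = weights

    def activate(self, name: str) -> bytes:
        # Promote a model from DDR into HBM, evicting the oldest
        # cached model if the fast tier is full.
        if name not in self.hbm:
            if len(self.hbm) >= self.hbm_slots:
                self.hbm.pop(next(iter(self.hbm)))  # evict oldest entry
            self.hbm[name] = self.ddr[name]
        return self.hbm[name]


store = TieredModelStore(hbm_slots=1)
store.load_to_ddr("model-a", b"weights-a")
store.load_to_ddr("model-b", b"weights-b")
store.activate("model-a")
store.activate("model-b")  # "model-a" is evicted from the HBM tier
```

The design point this sketch mirrors is fast model switching on a single socket: switching to a DDR-resident model is a promotion into the fast tier rather than a reload from outside the system.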
The SambaNova spokesperson said its competitive claims about performance and total cost of ownership are “based on internal benchmarking of SN50 against widely deployed, current-generation GPU systems running large language models.”
While the performance boost claim is based on “end-to-end throughput gains on latency-sensitive inference workloads,” the claim around lower costs is based on “system-level modeling across representative production deployments, incorporating hardware, power, cooling, networking, and sustained utilization,” the representative added.