SambaNova Unveils Fastest Chip for Agentic AI, Collaborates with Intel, and Raises $350M+

New SN50 chip boasts 5X faster speeds than competitors, enabling more efficient AI at scale.

Published on Feb. 26, 2026

SambaNova, a leader in next-generation AI infrastructure, has unveiled its new SN50 AI chip, which boasts speeds up to 5 times faster than competing chips. The company also announced a planned collaboration with Intel to deliver high-performance, cost-efficient AI inference solutions, and has raised over $350 million in Series E financing to expand manufacturing and cloud capacity.

Why it matters

As AI workloads become more complex and demanding, enterprises are seeking infrastructure that can handle agentic AI systems with low latency and high throughput at scale. SambaNova's SN50 chip and collaboration with Intel aim to provide a powerful, cost-effective alternative to GPU-centric solutions, enabling organizations to more efficiently deploy advanced AI models and applications.

The details

The SN50 chip is built on SambaNova's Reconfigurable Data Unit (RDU) architecture and offers several key capabilities:

  • Instant AI experiences with ultra-low latency.

  • Unmatched scale and concurrency to power thousands of simultaneous AI sessions.

  • Breakthrough model capacity for deeper reasoning.

  • Maximum efficiency at scale to lower cost-per-token.

SambaNova is also collaborating with Intel to integrate its systems with Intel's CPUs, accelerators, and networking technologies to power scalable, production-ready AI inference.

  • SambaNova plans to begin shipping the SN50 chip to customers later this year.

The players

SambaNova

A leader in next-generation AI infrastructure, providing a full-stack platform that powers the fastest, most efficient AI inference for enterprises, NeoClouds, AI labs and service providers, and sovereign AI initiatives worldwide.

Intel

A multinational technology company that has entered into a planned multi-year strategic collaboration with SambaNova to deliver high-performance, cost-efficient AI inference solutions.

Rodrigo Liang

The co-founder and CEO of SambaNova.

Kevork Kechichian

The Executive Vice President and General Manager of the Data Center Group at Intel.

Landon Downs

The co-founder and managing partner at Cambium Capital, which led the $350 million Series E funding round for SambaNova.

What they’re saying

“AI is no longer a contest to build the biggest model. With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”

— Rodrigo Liang, Co-founder and CEO of SambaNova (BusinessWire)

“Customers are asking for more choice and more efficient ways to scale AI. By combining Intel's leadership in compute, networking, and memory with SambaNova's full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”

— Kevork Kechichian, Executive Vice President and General Manager, Data Center Group, Intel (BusinessWire)

“AI is moving from a software story to an infrastructure story. SN50 is engineered for the real-world latency and economic requirements that will determine who successfully deploys agentic AI at scale.”

— Landon Downs, Co-founder and Managing Partner, Cambium Capital (BusinessWire)

What’s next

SambaNova and Intel plan to expand their collaboration to further integrate their technologies and go-to-market strategies, aiming to shape the next generation of heterogeneous AI data centers.

The takeaway

SambaNova's new SN50 chip and collaboration with Intel represent a significant advancement in the race to deliver high-performance, cost-efficient AI infrastructure that can power the next generation of agentic AI applications and services at scale.