
SambaNova Introduces New Chip Capable of Handling 5 Trillion Parameter Models

SambaNova Systems has launched its next-generation SN40L chip, promising to reshape large language model computing by handling models of up to 5 trillion parameters. The company claims the chip is 30x more efficient, drastically reducing the number of GPUs needed.

SambaNova's SN40L chip

In the rapidly evolving world of AI and machine learning, computing power is a critical constraint, especially when running massive language models like OpenAI's GPT-4. That's why the unveiling of SambaNova's SN40L chip is making headlines.

While not as well-known as tech giants like Google or Microsoft, SambaNova has been quietly working on full-stack AI solutions, raising over $1 billion from major investors like Intel Capital, BlackRock, and SoftBank Vision Fund.

Rodrigo Liang, the company's CEO, emphasizes the need for efficiency and cost-effectiveness in running large language models. The SN40L chip is designed specifically to meet this need, offering a claimed 30x improvement in efficiency by drastically reducing the number of chips required for such complex computations.

“We need to stop using this brute force approach of using more and more chips for large language model use cases,” Liang told TechCrunch.

One of the most striking claims about the SN40L chip is its efficiency. According to Liang, running a trillion-parameter model on competitor chips would require between 50 and 200 chips. SambaNova has purportedly reduced that requirement to just eight chips, without compromising accuracy.
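To see why chip count scales with model size, a rough back-of-envelope calculation helps: the weights of a trillion-parameter model alone run into terabytes, so the number of accelerators needed depends largely on how much memory each one can address. The sketch below illustrates that arithmetic only; the per-chip memory figures are illustrative assumptions, not SambaNova or GPU vendor specifications.

```python
import math

# Illustrative sketch: how parameter count and per-chip memory drive chip count.
# All memory figures are assumptions for illustration, not vendor specs.

def chips_needed(params: float, bytes_per_param: float, memory_per_chip_gb: float) -> int:
    """Minimum number of chips whose combined memory can hold the model weights."""
    weights_gb = params * bytes_per_param / 1e9
    return math.ceil(weights_gb / memory_per_chip_gb)

# A 1-trillion-parameter model in 16-bit precision needs ~2 TB just for weights.
print(chips_needed(1e12, 2, 80))   # ~25 chips if each accelerator addresses an assumed 80 GB
print(chips_needed(1e12, 2, 256))  # drops to ~8 chips if each addresses an assumed 256 GB
```

Under these assumed numbers, quadrupling the memory reachable per chip cuts the chip count from dozens to single digits, which is the general effect SambaNova is claiming for the SN40L.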

In addition to the hardware, SambaNova delivers a full-stack software solution. This allows companies to train their own models while retaining full ownership. "It's your data and your model," Liang assures, emphasizing that the trained model is returned to the customer.

SambaNova is offering the SN40L as part of a multi-year subscription model, which Liang believes will provide companies with more cost certainty over their AI projects.

The SN40L chip is available starting today and is fully backward-compatible with the company's previous generation of chips.

By introducing the SN40L chip, SambaNova has made a bold move toward greater AI computational efficiency. With the chip now available by subscription, it will be interesting to see how it changes the landscape of large language model processing and applications.
