Startup Tenstorrent shows AI is changing computing and vice versa

Tenstorrent is one of a rush of AI chip makers founded in 2016 that are finally showing product. The new wave of chips represents a substantial departure from how traditional computer chips work, but it also points to ways that neural network design may change in the years to come.

Tenstorrent founder and CEO Ljubisa Bajic speaking remotely at the Linley Group's Spring Processor Conference. 

2016 was an amazing year in the history of computing. That year, numerous experienced computer chip designers set out on their own to design novel kinds of parts to improve the performance of artificial intelligence. 

It's taken a few years, but the world is finally seeing what those young hopefuls have been working on. The new chips coming out suggest, as ZDNet has reported in the past, that AI is totally changing the nature of computing. They also suggest that changes in computing are going to have an effect on how artificial intelligence programs, such as deep learning neural networks, are designed. 

Case in point, startup Tenstorrent, founded in 2016 and headquartered in Toronto, Canada, on Thursday unveiled its first chip, "Grayskull," at a microprocessor conference run by the legendary computer chip analysis firm The Linley Group. The "Spring Processor Conference" was originally going to be held in Silicon Valley, but COVID-19 turned it into a Zoom video affair. The event was very well attended, organizer Linley Gwennap told ZDNet, and interest was evident from the number of audience questions submitted via Zoom chats.

Tenstorrent was founded by chief executive Ljubisa Bajic, who previously worked for Nvidia and Advanced Micro Devices, among others. As Bajic related to ZDNet, in 2016 he was trying to figure out his next move after several years at those large chip vendors. A legendary chip designer, Jim Keller, who worked with Bajic at AMD, "told me to just go and do what interested me," said Bajic. Keller wrote a check to Bajic, providing the first funding for the company.   

The company has now received a total of $34 million in funding from Eclipse Ventures and Real Ventures, among others. It also has offices in Austin, Texas, and in Silicon Valley.

Bajic, like other chip teams, is responding to the explosion in the size of deep learning models such as BERT and OpenAI's "GPT-2," as well as even newer models such as Nvidia's "Megatron," Microsoft's "Turing NLG," and neural net models Bajic said he couldn't discuss publicly that will have on the order of one trillion parameters. 

The Grayskull chip is meant to be used to speed up what's called "inference," the part of AI where a trained neural network makes predictions from new data. This part of the market has traditionally been dominated by Intel microprocessors in server computers in data centers. But Nvidia has made big inroads into inference with its graphics processing units (GPUs), and numerous startup companies have announced chip designs to compete with both of those chip giants.

The chip is expected to go into production this fall, Bajic told the conference.

The only way to beat Nvidia is with vastly superior performance, Bajic told ZDNet in an interview. "Most customers are not going to switch off of Nvidia for a part that is only two times better that is basically still an engineering sample," said Bajic. "Our goal is if we can be more than ten times better than Nvidia and sustain that for a few years, then we think people will come around to us if we can achieve that."

Early results look good. In a review of the Grayskull part put out Thursday, the Linley Group's lead analyst, Linley Gwennap, writes that Grayskull has "excellent" performance relative to Nvidia and other startups, including Groq, a company founded by a band of former Google engineers. In fact, the chip is more efficient at performing standard AI tasks than all other chips on the market, including Nvidia's, leaving aside processors that Chinese search giant Alibaba uses internally. For example, the chip can perform 368 trillion operations per second on a circuit board consuming just 75 watts, versus parts from Nvidia and others that require 300 watts on average. (Subscription required to read Linley Group articles.)

The "Grayskull" 75-watt PCIe card.

Tenstorrent 2020

What's going on in the Grayskull chip has interesting implications for computing. One focus is lots and lots of computer memory. The Grayskull part has 120 megabytes of on-chip SRAM memory, compared to just 18 megabytes for Nvidia's "Titan RTX" part. Nvidia's approach has been to hook up its GPUs to the fastest off-chip memory. But Tenstorrent and other startups are increasingly emphasizing the role of faster on-chip memory.

For example, Groq's "TSP" chip has almost twice as much memory as Tenstorrent, at 220 megabytes. And the record for on-chip memory is held by Cerebras Systems, which also presented at Thursday's conference. Cerebras's part, the world's largest chip, called the "Wafer Scale Engine," which was unveiled in August, has a grand total of 18 gigabytes of on-chip memory.

The growing influence of memory on chip design has some startling implications for computer system design. For example, Groq's TSP has no DRAM interface at all. Instead, the edges of the chip carry several "SERDES" connectors, the serializer-deserializer links familiar from data networking. The idea, explained Groq's Dennis Abts, who spoke following Bajic, is that instead of adding external DRAM, one can link multiple Groq chips together through the SERDES, so that all memory operations are served by the pooled on-chip SRAM, with no DRAM whatsoever. Like Cerebras and Tenstorrent, and other companies such as Graphcore, Groq ultimately envisions people using many of its chips in massively parallel computers that combine multiple boards. Hence, the era of external DRAM may be drawing to a close, replaced by on-chip SRAM in massively parallel computers.

As far as speeding up AI, having lots of on-chip memory serves several functions. One is to keep data close to the multiple on-chip computing cores. Tenstorrent has 120 computing cores on Grayskull, and Cerebras has 400,000 compute cores. The large on-chip memory is spread amongst those cores; it resides in the circuitry closest to each core, so that it takes no more than a single tick of the chip's clock for a core to access the memory it needs to read or write.
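
To make the idea concrete, here is a minimal sketch in Python and NumPy, not Tenstorrent's actual dataflow: a layer's weight matrix is partitioned across cores so that each core computes only with the weights held in its own local memory, never reaching out to a shared off-chip DRAM. The layer dimensions are illustrative assumptions; only the 120-core count comes from Grayskull.

```python
import numpy as np

# Illustrative sketch only (not Tenstorrent's actual dataflow): a layer's weights
# are partitioned across cores so each core touches only the slice held in its
# own local memory, instead of fetching weights from shared off-chip DRAM.

NUM_CORES = 120                       # Grayskull has 120 compute cores
IN_DIM, OUT_DIM = 768, 768            # hypothetical layer size

rng = np.random.default_rng(0)
weights = rng.standard_normal((OUT_DIM, IN_DIM)).astype(np.float32)
activations = rng.standard_normal(IN_DIM).astype(np.float32)

# "Load" each core's slice of the weight matrix into its local SRAM once.
local_weights = np.array_split(weights, NUM_CORES, axis=0)

# Each core computes its piece of the output using only its local slice.
partial_outputs = [w_local @ activations for w_local in local_weights]
output = np.concatenate(partial_outputs)

# The stitched-together result matches the monolithic multiply.
assert np.allclose(output, weights @ activations, atol=1e-3)
```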

A schematic of the Grayskull chip, with its 120 on-chip compute cores.

Tenstorrent 2020

As Cerebras's head of hardware engineering, Sean Lie, put it, "In machine learning, the weights and the activations are local, and there's low data reuse. But the traditional memory hierarchy isn't built that way." Instead, general-purpose chips such as Intel Xeon CPUs and GPUs spend a lot of time going all the way off the chip to external DRAM memory, which takes several clock cycles to access. By keeping the values that any one processing core is working on in memory right beside it, instead of going away to off-chip DRAM, "the physics of local memory drives higher performance," said Lie. 

There is a secondary efficiency, notes Lie, which is reducing the duplication that comes from using off-chip memory. GPUs "turn vector-matrix multiplies into matrix-matrix multiplies," he said, meaning they bunch several inputs together, what's known as "batching," which is the bête noire of most deep learning scientists. "It actually changes the training of machine learning," distorting results, said Lie. "Remove the large-batch multiplier is a goal," he added. That point was echoed by Tenstorrent's Bajic. "No more general matrix multiplication," Bajic told the conference. "No more batching."
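
The distinction Lie draws can be seen in a few lines of NumPy. With a batch of one, inference is a vector-matrix multiply; stacking requests together turns it into a matrix-matrix multiply, which keeps a GPU busy but forces inputs to wait on one another. The matrix size and the batch of 32 below are arbitrary, chosen only for illustration.

```python
import numpy as np

# Batch of one: inference is a vector-matrix multiply.
# Batch of 32: the same layer becomes a matrix-matrix multiply.

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512)).astype(np.float32)

single_input = rng.standard_normal(512).astype(np.float32)
single_output = single_input @ weights                      # vector-matrix, batch of 1

batch = rng.standard_normal((32, 512)).astype(np.float32)   # 32 requests bunched together
batch_output = batch @ weights                               # matrix-matrix, batch of 32

print(single_output.shape)   # (512,)
print(batch_output.shape)    # (32, 512)
```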

The kinds of neural net models that AI chips will have to handle, especially in the domain of natural language, are scaling to very large numbers of parameters, over a trillion, argues Tenstorrent founder and CEO Ljubisa Bajic.

Tenstorrent 2020

Instead of batching, all three companies, Tenstorrent, Groq, and Cerebras, emphasize handling individual inputs to a neural network independently on the chip's individual cores. That's where the implications become very interesting for machine learning. 

As they dispense with batching, Tenstorrent and the other companies are fixated on the concept of "sparsity," the notion that many neural networks can be processed more efficiently if redundant information is stripped away. Lie observed that there is "a large, untapped potential for sparsity" and that "neural networks are naturally sparse."
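
As a rough illustration of what "naturally sparse" means, the sketch below, which is not any vendor's actual scheme, shows how a ReLU activation zeroes out a large fraction of values, so hardware that tracks sparsity could skip the multiply-adds those zeros would otherwise feed. The sizes are arbitrary.

```python
import numpy as np

# After a ReLU, roughly half of these activations are exactly zero; the
# multiplications they feed contribute nothing and can in principle be skipped.

rng = np.random.default_rng(0)
activations = np.maximum(rng.standard_normal(4096).astype(np.float32), 0.0)
weights = rng.standard_normal((4096, 1024)).astype(np.float32)

dense_ops = activations.size * weights.shape[1]      # multiply-adds with nothing skipped
nonzero = np.flatnonzero(activations)
sparse_ops = nonzero.size * weights.shape[1]         # multiply-adds if zeros are skipped

output = activations[nonzero] @ weights[nonzero, :]  # same result, fewer multiplies
assert np.allclose(output, activations @ weights, atol=1e-3)

print(f"skipped {1 - sparse_ops / dense_ops:.0%} of the multiply-adds")
```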

Tenstorrent's Bajic told the conference that sparsity is at the heart of the Grayskull chip. One of the big influences upon the chip's design is the way that the brain's neurons only fire some of the time — they spike. Much of the time, those neurons are idle, consuming little power. 

"Spiking neural nets are more efficient, in a sense," Bajic said. "They have conditional execution, they have branching, etc. They are not efficient in trillions of operations per watt and terabytes per second, but they have this very nice feature of implementing functionality only when needed, and as a result they have very good power efficiency."

Tenstorrent says that with a little bit of extra training, a neural network such as Google's BERT can be optimized to exit its processing "early," saving on compute effort. 

Tenstorrent 2020

Hence, the Tenstorrent team has come up with a way to make the computation of a neural network in silicon more selective, what Bajic called "conditional execution." A neural network, such as Google's popular natural-language program BERT, goes through several layers of processing. It's possible to stop the network before it has gone through all the layers and avoid some of that computation, said Bajic. By "testing" whether the network has reached a satisfactory answer midway through its computation, the program can be stopped early, what's known as "early exit."

That's what Bajic and his team have done: designed software that re-trains a neural network to find the places where it can stop early. "Take BERT as a pre-packaged bunch of code, Python code calling PyTorch primitives, and add a bit of code that attaches early exit," explained Bajic. "And then run a little bit of fine-tuning training, about a half hour of additional training."

The training step goes through the entire process of compute, said Bajic, but the trained model, once ready to perform inference, can stop where it reaches a sufficient prediction, and save some compute. "There will be a statement that says, if my input is high confidence, and based on existing data, it's just not going to run rest of the network." Tenstorrent customers can either run the extra half hour of training before compiling their neural networks, or the Tenstorrent software can automatically re-train the model. "You can take networks like BERT and GPT2 and run them through our black box and get all this done so you as a user do not have to assign engineers to do it, you don't have to negotiate with the machine learning team to get it done," said Bajic.
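
What such an "early exit" might look like in code is sketched below in PyTorch. This is not Tenstorrent's software; the 12-layer encoder, the exit point after layer six, and the 0.9 confidence threshold are all illustrative assumptions. The small exit head is the piece that the brief fine-tuning pass would train (the training loop itself is omitted); at inference time, a confident prediction at the midpoint means the remaining layers never run.

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Toy encoder that can stop midway when an exit head is confident enough."""

    def __init__(self, hidden=256, classes=2, layers=12, exit_at=6, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
            for _ in range(layers)
        )
        self.exit_at = exit_at
        self.threshold = threshold
        self.early_head = nn.Linear(hidden, classes)  # trained during the short fine-tuning pass
        self.final_head = nn.Linear(hidden, classes)

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i + 1 == self.exit_at and not self.training:
                probs = torch.softmax(self.early_head(x.mean(dim=1)), dim=-1)
                confidence, prediction = probs.max(dim=-1)
                if confidence.item() >= self.threshold:
                    # High confidence: skip the remaining layers entirely.
                    return prediction
        return self.final_head(x.mean(dim=1)).argmax(dim=-1)

model = EarlyExitEncoder().eval()
tokens = torch.randn(1, 16, 256)   # one sequence of 16 token embeddings, no batching
print(model(tokens))
```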

The result of tricks such as early exit can be a dramatic speed-up in performance. The Grayskull chip can process 10,150 sentences per second with BERT, versus the roughly 2,830 sentences per second that is customary for competing chips, about three and a half times the throughput.  

That's a neat trick, but it's also a change to the way that neural nets are thought about. The Grayskull part signals that what a chip can do may change how such networks are designed in the future. It's like what Facebook's head of AI, Yann LeCun, pointed out a year ago: "Hardware capabilities and software tools both motivate and limit the type of ideas that AI researchers will imagine and will allow themselves to pursue." 

"The tools at our disposal fashion our thoughts more than we care to admit," LeCun has said.

What is coming into focus, then, is a world in which both computing approaches and artificial intelligence approaches will be changing simultaneously, affecting one another in a symbiotic way. That means it will become increasingly difficult to talk about things such as how fast a given machine learning model runs, or how fast a chip is, without considering how different both are from past efforts. Metrics in either case will be intimately bound up with the choices made by chip designers, computer designers, and AI scientists who build neural networks.

"MLPerf has not comprehended this kind of approach to the problem at all," said Bajic, referring to benchmark chip tests. "We sort of make it not quite BERT," he said. "that's something I would invite MLPerf people to think about."
