Cerebras Systems unveils a record 1.2 trillion transistor chip for AI

Link post

From the Reddit comments:

....how scared should we be?

Gwern:

No idea. They don’t provide benchmarks, and while they promise they are forthcoming, it sounds like it might be months. In the meantime, there’s just vague hype about going from ‘months to minutes’. Hardware startups have a long history of overpromising and underdelivering: it’s hard to beat Nvidia & Moore’s Law (remember all those ‘analogue computing’ startups?). It sure does sound interesting, though: 18GB of on-chip SRAM rather than HBM or DDR* RAM? 1.2 trillion transistors? Potentially FPGA-style streaming of data points through a single on-chip model with each layer being handled by different sets of cores? Sparsity multipliers? All quite interesting sounding and I will be very interested in the benchmarks, whenever they should be forthcoming. If nothing else, it is an extreme architecture of a type you rarely see.
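
For readers unfamiliar with the "streaming" idea Gwern mentions, here is a minimal toy sketch of layer-pipelined execution: each layer is pinned to its own group of workers (standing in for core groups with weights resident in local SRAM), and individual data points flow through them rather than being processed in full batches. This is purely illustrative and assumes nothing about Cerebras's actual programming model; the worker/queue setup below is my own invention.

```python
# Toy sketch of layer-pipelined ("streaming") execution: each layer lives on
# its own set of cores (here, its own worker thread) and data points stream
# through them. Not Cerebras's programming model, just an illustration.

import queue
import threading

import numpy as np

N_LAYERS = 4
DIM = 8
STOP = object()  # sentinel to shut the pipeline down

# Fixed random weights per "layer"; on wafer-scale hardware these would sit
# in each core group's local SRAM rather than being fetched from off-chip RAM.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(N_LAYERS)]

# One queue between each pair of adjacent layers (plus an output queue).
queues = [queue.Queue(maxsize=2) for _ in range(N_LAYERS + 1)]

def layer_worker(idx: int) -> None:
    """Stand-in for one core group: apply layer idx to whatever streams in."""
    w = weights[idx]
    while True:
        item = queues[idx].get()
        if item is STOP:
            queues[idx + 1].put(STOP)
            return
        sample_id, x = item
        y = np.maximum(x @ w, 0.0)  # linear + ReLU
        queues[idx + 1].put((sample_id, y))

workers = [threading.Thread(target=layer_worker, args=(i,)) for i in range(N_LAYERS)]
for t in workers:
    t.start()

# Stream individual data points in; different samples occupy different layers
# at the same time, so no layer waits for a full batch to accumulate.
for sample_id in range(6):
    queues[0].put((sample_id, rng.standard_normal(DIM)))
queues[0].put(STOP)

while True:
    item = queues[-1].get()
    if item is STOP:
        break
    sample_id, out = item
    print(f"sample {sample_id}: output norm {np.linalg.norm(out):.3f}")

for t in workers:
    t.join()
```

The point of the sketch is only the dataflow shape: weights stay put, activations move, and throughput comes from all layers working on different samples concurrently. Whether the real chip's sparsity handling or core layout looks anything like this will have to wait for the benchmarks Gwern asks about.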