Event
Hot Chips 34
Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning
Speaker
Sean Lie, Co-Founder and Chief Hardware Architect, Cerebras
Session Title: Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning
Blog
Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning
Our ML-optimized architecture enables the largest models to run on a single device. With data-parallel-only scale-out and native unstructured sparsity acceleration, Cerebras is making large models available to everyone. (Talk from Hot Chips 34.)
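The blurb names two ideas covered in the talk: data-parallel-only scale-out and unstructured sparsity. The snippet below is a minimal NumPy sketch of those two concepts in the abstract, not Cerebras's implementation; `sparsify`, `data_parallel_step`, and the toy linear model are hypothetical names chosen purely for illustration.

```python
# Conceptual sketch only: illustrates unstructured weight sparsity and
# data-parallel gradient averaging with plain NumPy. This is NOT how the
# Cerebras hardware or software stack implements either feature.
import numpy as np

rng = np.random.default_rng(0)

def sparsify(weights: np.ndarray, density: float) -> np.ndarray:
    # Unstructured sparsity: individual weights are zeroed with no block pattern.
    mask = rng.random(weights.shape) < density
    return weights * mask

def data_parallel_step(weights, batch_shards, grad_fn, lr=1e-3):
    # Each replica holds the full (sparse) model and computes gradients on its
    # own shard of the batch; the mean here stands in for an all-reduce.
    grads = [grad_fn(weights, shard) for shard in batch_shards]
    avg_grad = np.mean(grads, axis=0)
    return weights - lr * avg_grad

def grad_fn(w, shard):
    # Toy linear model y = x @ w with mean squared-error loss.
    x, y = shard
    return 2 * x.T @ (x @ w - y) / len(x)

w = sparsify(rng.normal(size=(8, 1)), density=0.3)  # roughly 70% of weights zeroed
shards = [(rng.normal(size=(16, 8)), rng.normal(size=(16, 1))) for _ in range(4)]
w = data_parallel_step(w, shards, grad_fn)
```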
Whitepaper
Deep Learning Programming at Scale
Deep learning has become one of the most important computational workloads of our generation, but it is enormously compute-intensive. Today, large neural networks are often trained on large clusters of graphics processing units (GPUs). These clusters are expensive and complicated to program, and training a single network on them can take weeks.
Video
Thinking Outside the Die, Part 1: The Grand Challenge
We started Cerebras with a vision to drastically change the landscape of compute for AI. In this five-part series, Sean Lie, Co-Founder and Chief Hardware Architect at Cerebras Systems, shares some of the “outside the die” thinking we believe is necessary to meet the demands of ML in the future.