Watch keynotes, panel discussions, and fireside chats from our historic event
Opening Keynote
Andrew Feldman, CEO
Hardware Keynote
Sean Lie, CTO
ML Models and Product
Jessica Liu, VP of Product
Neural Magic Keynote
AI Applications and Research Panel Discussion
Qualcomm and Cerebras Fireside Chat
G42 and Cerebras Fireside Chat
📣 Announcing the Fastest AI Chip on Earth 📣
Cerebras proudly announces CS-3: the fastest AI accelerator in the world.
The CS-3 can train models of up to 24 trillion parameters on a single device. The world has never seen AI at this scale.
CS-3 specs:
⚙ 46,225 mm² silicon | 4 trillion transistors | 5 nm
⚙ 900,000 cores optimized for sparse linear algebra
⚙ 125 petaflops of AI compute
⚙ 44 gigabytes of on-chip memory
⚙ 1,200 terabytes of external memory
⚙ 21 PByte/s memory bandwidth
⚙ 214 Pbit/s fabric bandwidth
📰 Press Release
👨‍🎓 Learn More
Cerebras + Qualcomm to deliver unprecedented performance in AI inference
📈 Up to a 10x performance improvement for large-scale generative AI inference when using Cerebras CS-3 for training and Qualcomm® Cloud AI 100 Ultra for inference.
📉 Radically lower inference costs through cutting-edge ML techniques like unstructured sparsity, speculative decoding, efficient MX6 inference, and Network Architecture Search (NAS).
Combining these and other advanced techniques enables inference-aware training on Cerebras and inference-ready models that can be deployed on Qualcomm cloud instances anywhere.
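Of the techniques listed above, speculative decoding is easy to illustrate: a small, cheap "draft" model proposes several tokens ahead, and the large "target" model verifies them, accepting the longest agreeing prefix so that multiple tokens can be committed per expensive forward pass. The sketch below is a minimal toy version with made-up integer "models"; it does not reflect Cerebras or Qualcomm APIs.

```python
# Toy sketch of speculative decoding. The draft and target "models" are
# illustrative integer rules, not real language models.

def draft_model(context, k):
    """Cheap model: propose the next k tokens (toy rule: count upward)."""
    out, last = [], context[-1]
    for _ in range(k):
        last += 1
        out.append(last)
    return out

def target_model(context):
    """Expensive model's greedy next token (toy rule: count upward,
    except it emits 0 right after any multiple of 5)."""
    last = context[-1]
    return 0 if last % 5 == 0 else last + 1

def speculative_step(context, k=4):
    """One decode step: draft k tokens, then verify with the target model.

    Returns the tokens actually committed. Here verification is a loop,
    but in a real system all k positions are checked in one batched
    forward pass of the target model, which is where the speedup comes from.
    """
    proposed = draft_model(context, k)
    accepted, ctx = [], list(context)
    for tok in proposed:
        expect = target_model(ctx)
        if tok == expect:
            accepted.append(tok)
            ctx.append(tok)
        else:
            # First mismatch: keep the target model's token and stop.
            accepted.append(expect)
            break
    return accepted

# Draft agrees until the target model diverges after the token 5.
print(speculative_step([1, 2, 3], k=4))  # [4, 5, 0]
```

When the draft model agrees with the target most of the time, each verification pass commits several tokens instead of one, cutting inference cost roughly in proportion to the acceptance rate.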
📣 Announcing Condor Galaxy 3 📣
Cerebras and G42 announced the construction of Condor Galaxy 3 (CG-3), the third cluster in the Condor Galaxy, their constellation of AI supercomputers.
CG-3 will be built from 64 Cerebras CS-3 systems and is designed and delivered in the United States. CG-1 and CG-2 are located in California; CG-3 will be located in Dallas, Texas.
CG-3 specs:
🚀 8 exaflops of AI compute
🚀 64 CS-3 systems
🚀 58 million AI-optimized cores
With CG-3, you can train Llama-70B in days, not months.
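The cluster totals follow arithmetically from the per-system CS-3 specs listed earlier; a quick check (57.6 million cores is rounded to 58 million in the announcement):

```python
# Sanity-check CG-3's headline numbers from the per-system CS-3 specs.
cs3_petaflops = 125   # AI compute per CS-3 system
cs3_cores = 900_000   # cores per CS-3 system
systems = 64          # CS-3 systems in CG-3

total_exaflops = systems * cs3_petaflops / 1000      # 8.0 exaflops
total_cores = systems * cs3_cores                    # 57,600,000 cores

print(total_exaflops)  # 8.0
print(total_cores)     # 57600000
```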
📰 Press Release
👨‍🎓 Learn More