The International Conference for High Performance Computing, Networking, Storage, and Analysis

Booth #2703

November 17–22, Atlanta, GA


Where to find us?

Cerebras Booth 2703 is located near the entrances of Hall B.
Cerebras will also be hosting private meetings at Exhibitor Suite 1.
Contact us if you’d like to meet! www.cerebras.ai/contact-us


Come visit our booth

Interact with Cerebras Inference: We will have interactive workstations where you can experience the world’s fastest inference through audio, video, and chat.

See our gear: We will have on display our wafer-scale engine, our engine block, and our systems. Come by and snap a picture!

Meet our people: Cerebras leadership, engineers, and more will be on hand to meet you and answer your questions. This includes the authors of our Gordon Bell Prize finalist research.

Speaker

Programming Novel AI Accelerators for Scientific Computing

Leighton Wilson

Date: Nov 17, 2024
Time: 8:30am – 5pm

Location: B201

Publication

Cerebras a Finalist for the 2024 ACM Gordon Bell Prize

READ THE PAPER

The ACM Gordon Bell Prize recognizes outstanding achievement in high performance computing. The purpose of the award is to track the progress over time of parallel computing, with particular emphasis on rewarding innovation in applying high performance computing to applications in science, engineering, and large-scale data analytics. 

Cerebras is a finalist for our collaborative work: Breaking the Molecular Dynamics Timescale Barrier Using a Wafer-Scale System 

This team has created an Embedded Atom Method (EAM)-based molecular dynamics code that exploits the ultra-fast communication and high memory bandwidth afforded by the 850,000-core Cerebras Wafer-Scale Engine. It attains perfect weak scaling across the full system for grain boundary problems involving copper, tungsten, and tantalum atoms, and can extend to multiple wafers. For problems of up to 800,000 atoms, it calculates significantly more timesteps per second than EAM in LAMMPS on Quartz and Frontier, directly benefiting the modeling of phenomena that emerge at long timescales.
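For background, the EAM potential mentioned above has a standard general form (the notation below is the textbook one; the specific functional fits used for Cu, W, and Ta in the paper may differ):

\[
E_{\mathrm{total}} = \sum_i F_i\big(\bar{\rho}_i\big) + \frac{1}{2}\sum_{i \neq j} \phi_{ij}\big(r_{ij}\big),
\qquad
\bar{\rho}_i = \sum_{j \neq i} \rho_j\big(r_{ij}\big)
\]

Here F_i is the embedding energy of atom i in the local electron density \(\bar{\rho}_i\), \(\phi_{ij}\) is a short-ranged pair potential, and \(\rho_j\) is the density contribution of neighbor j at distance \(r_{ij}\). Because each atom interacts only with nearby neighbors, the computation is highly local, the kind of structure that maps naturally onto the wafer’s on-chip memory and mesh communication.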

AUTHORS 

Kylee Santos, Stan Moore, Tomas Oppelstrup, Amirali Sharifian, Ilya Sharapov, Aidan Thompson, Delyan Z. Kalchev, Danny Perez, Robert Schreiber, Scott Pakin, Edgar A. Leon, James H. Laros III, Michael James, Sivasankaran Rajamanickam 

AFFILIATIONS 

Cerebras Systems, Sandia National Laboratories, Lawrence Livermore National Laboratory, Los Alamos National Laboratory 

Cerebras and partners won the Gordon Bell Special Prize for COVID-19 research in 2022.
Blog

Introducing Sparse Llama: 70% Smaller, 3x Faster, Full Accuracy

Cerebras and Neural Magic have achieved a major milestone in the field of large language models (LLMs). By combining state-of-the-art pruning techniques, sparse pretraining, and purpose-built hardware, we have unlocked unprecedented levels of sparsity in LLMs, enabling up to 70% parameter reduction without compromising accuracy.
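For intuition only, the sketch below shows generic unstructured magnitude pruning; the function name and threshold rule are illustrative and this is not the Cerebras/Neural Magic recipe, which also relies on sparse pretraining and fine-tuning to preserve accuracy.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Zero the smallest-magnitude entries of a weight tensor.
    # Illustrative only: real sparse-LLM pipelines pair pruning with
    # sparse pretraining and hardware support; this shows just the core idea.
    k = int(weight.numel() * sparsity)                      # weights to zero
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values   # k-th smallest |w|
    mask = weight.abs() > threshold                         # keep larger weights
    return weight * mask

# Example: prune a randomly initialized layer to roughly 70% sparsity
w = torch.randn(4096, 4096)
w_sparse = magnitude_prune(w, sparsity=0.70)
print(f"achieved sparsity: {(w_sparse == 0).float().mean().item():.2%}")
```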

Blog

Cerebras Breaks Exascale Record for Molecular Dynamics Simulations

Cerebras has set a new record for molecular dynamics simulation speed that goes far beyond the exascale level. While this breakthrough has wide-ranging impacts for materials modeling, we initially focused on a problem relevant to commercializing nuclear fusion. This achievement demonstrates how Cerebras's wafer-scale computers enable novel computational science applications.

Blog

Cerebras CS-3 vs. Nvidia B200: 2024 AI Accelerators Compared

In the fast-paced world of AI hardware, the Cerebras CS-3 and Nvidia DGX B200 are two of the most exciting new offerings to hit the market in 2024. Both systems are designed to tackle large-scale AI training, but they take decidedly different approaches.
