HAMBURG, GERMANY – May 31, 2022 – At ISC 2022, Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, announced its growing roster of supercomputing partners, including the Edinburgh Parallel Computing Centre (EPCC), Leibniz Supercomputing Centre (LRZ), Lawrence Livermore National Laboratory, Argonne National Laboratory (ANL), the National Center for Supercomputing Applications (NCSA), and the Pittsburgh Supercomputing Center (PSC).
“At Cerebras Systems, our goal is to revolutionize compute,” said Andrew Feldman, CEO and co-founder of Cerebras Systems. “It’s thrilling to see some of the most respected supercomputing centers around the world deploying our CS-2 system to accelerate their AI workloads and achieving incredible scientific breakthroughs in climate research, precision medicine, computational fluid dynamics and more.”
In Europe, EPCC in the UK and LRZ in Germany announced CS-2 deployments to accelerate scientific research in their regions. EPCC has been a customer since 2021 and recently agreed to upgrade its CS-1 to a CS-2 to greatly accelerate natural language processing (NLP) for genomics as part of its public health initiatives, establishing Edinburgh as a regional center of AI innovation.
“EPCC has had a great experience with its CS-1 and is thrilled to be upgrading to a CS-2. The support and engagement we’ve had from Cerebras has been fantastic, and we look forward to even more success with our new system,” said Professor Mark Parsons, EPCC Director.
LRZ is set to accelerate innovation and scientific discovery in Germany with the CS-2 in its forthcoming AI supercomputer. Coming online this summer, the new supercomputer will enable Germany’s researchers to bolster scientific research and innovation with AI. Initial work will focus on medical image processing, using novel algorithms and computer-aided capabilities to accelerate diagnosis and prognosis, and on computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.
“Currently, we observe that AI compute demand among our users is doubling every three to four months. By tightly integrating processors, memory and on-board networks on a single chip, Cerebras enables high performance and speed. This promises significantly more efficient data processing and thus faster scientific breakthroughs,” said Prof. Dr. Dieter Kranzlmüller, Director of the LRZ.
Continuing this trend, NCSA’s new HOLL-I supercomputer for extreme-scale machine learning is powered by a CS-2 system. HOLL-I is unique in NCSA’s supercomputing portfolio in that it is built to handle machine learning jobs at intense speeds, cutting down on compute time and lowering overall costs while delivering exceptional performance. Initial AI applications are focused on NCSA’s Industry Partners group; however, HOLL-I will be broadly available, at cost, for extreme-scale AI projects.
“We’re thrilled to have the Cerebras CS-2 system up and running in our HOLL-I supercomputer,” said Vlad Kindratenko, director of the Center for Artificial Intelligence Innovation at NCSA.
Argonne National Laboratory’s Leadership Computing Facility recently upgraded from a single CS-1 system to two CS-2 systems. As part of its AI Testbed, the CS-2 systems are enabling researchers to explore next-generation machine learning applications and workloads to advance the use of AI for science. Prior work done on the Cerebras CS-1 and CS-2 was nominated for the Gordon Bell Special Prize for HPC-Based COVID-19 Research at SC21.
PSC also doubled its AI capacity to 1.7 million AI cores with two CS-2 systems, powering the center’s Neocortex supercomputer for high-performance AI. Thanks to the CS-2 systems, Neocortex allows researchers to train larger deep learning models on larger datasets while scaling model parallelism to unprecedented levels.
“With two Cerebras CS-2 systems, we look forward to the breakthroughs that the now even greater capabilities of Neocortex will enable,” said Paola Buitrago, principal investigator of Neocortex and Director, Artificial Intelligence & Big Data at PSC. “We will continue working with the research community to help them take advantage of this technology that is orders of magnitude more powerful.”
The Cerebras CS-2 is the fastest AI system in existence, powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than its nearest competitor. As a result, the CS-2 delivers more AI-optimized compute cores, more fast memory, and more fabric bandwidth than any other deep learning processor in existence. It was purpose-built to accelerate deep learning workloads, reducing time to answer by orders of magnitude.
With customers and partners in North America, Asia, Europe and the Middle East, Cerebras is delivering industry leading AI solutions to a growing roster of customers in the enterprise, government, and high performance computing segments including GlaxoSmithKline, TotalEnergies, Leibniz Supercomputing Centre, National Center for Supercomputing Applications, nference, Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, Edinburgh Parallel Computing Centre (EPCC), National Energy Technology Laboratory, and Tokyo Electron Devices.
For more information about the Cerebras CS-2 system for scientific computing, please visit: https://cerebras.ai/industry-scientific-computing/
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer system, designed for the singular purpose of accelerating AI and changing the future of AI work forever. Our flagship product, the CS-2 system, is powered by the world’s largest processor – the 850,000-core Cerebras WSE-2 – and enables customers to accelerate their deep learning work by orders of magnitude over graphics processing units.