LOS ALTOS, Calif.–(BUSINESS WIRE)–Cerebras Systems, a company dedicated to accelerating artificial intelligence (AI) compute, today announced a partnership with the U.S. Department of Energy (DOE) to bring supercomputer-scale AI to the massive deep learning experiments being pursued at its laboratories for basic and applied science and medicine. Argonne National Laboratory and Lawrence Livermore National Laboratory are the first labs announced in Cerebras’ multi-year, multi-laboratory partnership, with more to follow in the coming months. The partnership comes on the heels of last month’s introduction of the Cerebras Wafer Scale Engine (WSE), the largest chip ever built.

“The stand-up of DOE’s new Artificial Intelligence and Technology Office underscores the broad importance of AI to all of our mission, business and operational functions,” said Dr. Dimitri Kusnezov, DOE’s Deputy Under Secretary for Artificial Intelligence & Technology. “We are excited to partner with innovative companies like Cerebras Systems to push the frontiers of AI. The strategic deployment of high-performance AI systems with next-generation innovative technologies like Cerebras’ Wafer Scale Engine to build and defend national competitive advantage is very much at the heart of Secretary of Energy Rick Perry’s vision and in line with President Trump’s executive order on AI dated February 11, 2019.”

“We are honored and proud to partner with the Department of Energy and the talented researchers at Argonne National Laboratory and Lawrence Livermore National Laboratory,” said Andrew Feldman, co-founder and CEO of Cerebras Systems. “Together we aim to push the boundaries of AI technologies by combining DOE’s unmatched computing capabilities with the largest and highest performing AI processor ever built – the Cerebras WSE. In partnership, we aim to gain traction on a diverse set of grand challenges that will touch virtually everything we do.”

In August, Cerebras Systems announced the WSE, a single chip that contains more than 1.2 trillion transistors. With an area of 46,225 square millimeters, the WSE is the largest chip in the world and enables AI at supercompute scale. The WSE is 56.7 times larger than the largest graphics processing unit, which measures only 815 square millimeters and contains only 21.1 billion transistors.[1] The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. The massive size and resources of Cerebras’ WSE make it an ideal instrument to accelerate the Department of Energy’s numerous deep learning experiments across its mission, business and operations, including basic and applied science and medicine.

“The opportunity to incorporate the largest and fastest AI chip ever—the Cerebras WSE—into our advanced computing infrastructure will enable us to dramatically accelerate our deep learning research in science, engineering and health,” said Rick Stevens, head of computing at Argonne National Laboratory. “It will allow us to invent and test more algorithms, to more rapidly explore ideas, and to more quickly identify opportunities for scientific progress.”

“Integrating Cerebras technology into the Lawrence Livermore National Laboratory supercompute infrastructure will enable us to build a truly unique compute pipeline with massive computation, storage, and, thanks to the Wafer Scale Engine, dedicated AI processing,” said Bronis R. de Supinski, CTO of Livermore Computing at LLNL. “This unique opportunity for public-private partnership with a cutting-edge AI partner will help us meet our mission and push the boundaries of managing the increasingly complex and large data sets from which we have to make decisions.”

When it comes to AI compute, bigger is better. Big chips process information more quickly, producing answers in less time than clusters of small chips. By accelerating all the components of AI training, the Cerebras WSE trains models faster than alternative approaches. Unlike graphics processors, which were designed primarily for rendering graphics, the WSE is designed from the ground up for AI work. It contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited chip size—cross-reticle connectivity, yield, power delivery, and packaging.

“Cerebras’ ability to partner with leading supercomputer sites indicates the performance potential of the Wafer Scale Engine,” said Linley Gwennap, principal analyst at The Linley Group. “The processor’s tight coupling of compute resources and memory on a massive scale, enabled by the startup’s innovative engineering solutions, makes it uniquely suited to solving supercomputer-caliber problems.”

For more information on Cerebras Systems and the Cerebras WSE, please visit cerebras.ai. Imagery and digital photography for the Cerebras WSE can be found linked here.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art. The first announced element of the Cerebras solution is the Wafer Scale Engine (WSE). The WSE is the largest chip ever built. It contains 1.2 trillion transistors and covers 46,225 square millimeters of silicon. The largest graphics processor on the market has 21.1 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can train in minutes on the Cerebras WSE.

__________________________________________
[1] https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf

Contacts

Press contact (for media only)
Kim Ziesemer
Email: pr@zmcommunications.com