Cerebras
Inference

The world’s fastest inference: 70x faster than GPU clouds, 128K context, 16-bit precision.

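For developers, Cerebras Inference is reachable over a standard chat-completions API. The sketch below is illustrative rather than official sample code: it assumes the OpenAI-compatible endpoint at api.cerebras.ai, and the model id and environment variable are placeholders; check the Cerebras documentation for current values.

# Illustrative sketch: querying Cerebras Inference through an
# OpenAI-compatible chat-completions endpoint. The endpoint path,
# model id, and env var are assumptions; consult the official docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed Cerebras endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # placeholder key variable
)

response = client.chat.completions.create(
    model="llama3.1-8b",  # placeholder model id
    messages=[{"role": "user", "content": "Why is wafer-scale inference fast?"}],
)
print(response.choices[0].message.content)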

Genomic Foundation Model

A revolutionary model designed to improve
diagnostics and personalize treatment selection.

Latest Announcements

Cerebras Systems and Mayo Clinic Unveil Best in Class Genomic Foundation Model

ROCHESTER, Minn., and SUNNYVALE, Calif. — January 14, 2025 — Cerebras Systems, in collaboration with Mayo Clinic, today announced significant progress in developing artificial intelligence tools to advance patient care at the J.P. Morgan Healthcare Conference in San Francisco. Together, Cerebras and Mayo Clinic have developed a world-class genomic foundation model designed to support physicians and patients.


Cerebras Demonstrates Trillion Parameter Model Training on a Single CS-3 System

SUNNYVALE, Calif., and VANCOUVER — December 10, 2024 — Today at NeurIPS 2024, Cerebras Systems, the pioneer in accelerating generative AI, announced a groundbreaking achievement in collaboration with Sandia National Laboratories: successfully demonstrating training of a 1 trillion parameter AI model on a single CS-3 system. Trillion parameter models represent the state of the art in today’s LLMs and typically require thousands of GPUs and dozens of hardware experts to train. By leveraging Cerebras’ Wafer Scale Cluster technology, researchers at Sandia were able to initiate training on a single AI accelerator – a one-of-a-kind achievement for frontier model development.


Cerebras Delivers Record-Breaking Performance with Meta's Llama 3.1-405B Model

Llama 3.1 405B now runs at 969 tokens/s on Cerebras Inference: frontier AI now runs at instant speed. Last week we ran a customer workload on Llama 3.1 405B at 969 tokens/s – a new record for Meta’s frontier model. Llama 3.1 405B on Cerebras is by far the fastest frontier model in the world – 12x faster than GPT-4o and 18x faster than Claude 3.5 Sonnet. In addition, we achieved the highest performance at 128K context length and the shortest time-to-first-token latency, as measured by Artificial Analysis.
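As a rough illustration of how a figure like time-to-first-token can be measured from the client side, here is a minimal sketch using a streaming request. It assumes the same OpenAI-compatible endpoint and placeholder names as the sketch above, and the measured number includes network round-trip time.

# Rough sketch: client-side time-to-first-token (TTFT) measurement via
# a streaming request. Endpoint and model id are assumptions, and the
# result includes network latency, not just server-side generation.
import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],
)

start = time.perf_counter()
stream = client.chat.completions.create(
    model="llama-3.1-405b",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize wafer-scale computing."}],
    stream=True,
)
for i, chunk in enumerate(stream):
    if i == 0:
        # The first streamed chunk marks time-to-first-token.
        print(f"TTFT: {time.perf_counter() - start:.3f}s")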


Award-Winning Technology

Cerebras continues to be recognized for pushing the boundaries of AI

TIME

FORBES

FORTUNE

ai model services

You bring the data, we'll train the model

Whether you want to build a multilingual chatbot or predict DNA sequences, our team of AI scientists and engineers will work with you and your data to build state-of-the-art models leveraging the latest AI techniques.


high performance computing

The fastest HPC accelerator on earth

With 900,000 cores and 44 GB of on-chip memory, the CS-3 completely redefines the performance envelope of HPC systems. From Monte Carlo Particle Transport to Seismic Processing, the CS-3 routinely outperforms entire supercomputing installations.


Models on Cerebras

The Cerebras platform has trained a huge assortment of models, from multilingual LLMs to healthcare chatbots. We help customers train their own foundation models or fine-tune open-source models like Llama 2. Best of all, the majority of our work is open source.

llama 3.1

Foundation language model
8B, 70B, 405B, 15T tokens
128K context

MED42

Medical Q&A LLM
Fine-tuned from Llama2-70B
Scores 72% on USMLE

Mistral

7B Foundation Model

JAIS

Bilingual Arabic + English model
13B, 30B parameters
Available on Azure, G42 Cloud

OPEN SOURCE
TRAINED ON CEREBRAS

starcoder

Coding LLM
15.5B parameters, 1T tokens
8K context

OPEN WEIGHTS
TRAINED ON CEREBRAS

diffusion transformer

Image generation model
33M-2B parameters
Adaptive layer norm
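For readers unfamiliar with adaptive layer norm, the following is a generic sketch of the mechanism as used in diffusion transformers, not Cerebras’ published code: the conditioning vector (e.g. a timestep embedding) regresses a per-layer scale and shift that are applied after a parameter-free LayerNorm.

# Generic adaptive layer norm (adaLN) sketch in PyTorch; dimensions and
# module structure are illustrative, not Cerebras' actual implementation.
import torch
import torch.nn as nn

class AdaLayerNorm(nn.Module):
    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        # Parameter-free LayerNorm; scale/shift come from the condition.
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); cond: (batch, cond_dim)
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)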

FALCON

Foundation language model
40B parameters, 1T tokens
Flash Attention and multi-query attention

T5

For NLP applications
Encoder-decoder model
60M-11B parameters

CEREBRAS-GPT

Foundation language model
100M–13B parameters
NLP

OPEN SOURCE
TRAINED ON CEREBRAS

BTLM-chat

BTLM-3B-8K fine-tuned for chat
3B parameters, 8K context
Direct Preference Optimization
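Direct Preference Optimization trains the chat model directly on preference pairs rather than fitting a separate reward model. The sketch below is the published DPO loss written in generic PyTorch, not the BTLM-chat training code; the beta value is illustrative.

# Generic Direct Preference Optimization (DPO) loss sketch. Inputs are
# summed log-probabilities of whole responses, each of shape (batch,).
# Not the actual BTLM-chat training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    # Log-ratio of the policy vs. the frozen reference model.
    chosen = policy_chosen_logps - ref_chosen_logps
    rejected = policy_rejected_logps - ref_rejected_logps
    # Push the preferred response's ratio above the dispreferred one's.
    return -F.logsigmoid(beta * (chosen - rejected)).mean()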

gigaGPT

Implements nanoGPT on Cerebras
Trains 175B+ models
565 lines of code

CRYSTALCODER

Trained for English + Code
7B Parameters, 1.3T Tokens
LLM360 Release

OPEN SOURCE
TRAINED ON CEREBRAS
