

The Future of AI is Wafer Scale

Four trillion transistors. 125 petaflops. One silicon wafer. The world’s largest and most powerful processor for AI training and inference.

The WSE-3 is the largest AI chip ever built, measuring 46,225 mm² and containing 4 trillion transistors. It delivers 125 petaflops of AI compute through 900,000 AI-optimized cores — 19× more transistors and 28× more compute than the NVIDIA B200.

How Cerebras Solved the Yield Problem

Think a silicon wafer-sized chip can't be economical? Think again. Cerebras made wafer-scale work by designing to withstand defects rather than avoid them: redundant compute cores, redundant routing, and a fail-in-place architecture that disables flawed cores and routes around them.
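The redundancy idea can be illustrated with a toy model (a conceptual sketch only, not Cerebras' actual design or parameters): each row of a core grid carries spare cores, and after wafer test, defective cores are disabled and the logical core array is remapped onto the working physical cores.

```python
# Toy fail-in-place sketch: rows of physical cores with built-in spares.
# Defective cores are skipped and logical positions remap to working ones,
# so the array still presents its full logical width despite defects.

def map_logical_cores(grid, spares_per_row):
    """grid: list of rows; each row is a list of booleans (True = core works).
    Returns, per row, the physical column indices backing the logical cores,
    or raises if a row has more defects than it has spares to absorb."""
    logical = []
    for r, row in enumerate(grid):
        usable = [c for c, ok in enumerate(row) if ok]
        needed = len(row) - spares_per_row  # logical width per row
        if len(usable) < needed:
            raise RuntimeError(f"row {r}: too many defects to repair")
        logical.append(usable[:needed])  # route around dead cores
    return logical

# Example: 3 rows of 6 physical cores, 2 spares per row (4 logical cores).
wafer = [
    [True, True, False, True, True, True],   # one defect: absorbed by spares
    [True, False, True, True, False, True],  # two defects: still repairable
    [True, True, True, True, True, True],    # perfect row
]
print(map_logical_cores(wafer, spares_per_row=2))
```

The point of the sketch is the yield math: a row only fails if its defect count exceeds its spare count, so with enough fine-grained redundancy the probability that any defect kills the whole wafer becomes negligible.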


Performance comparisons are based on third-party benchmarking or internal testing. Observed inference speed improvements versus GPU-based systems may vary depending on workload, configuration, date and models being tested.


© 2026 Cerebras.
All rights reserved.