
Videos

March 24, 2025

Andrew Feldman, Cerebras Co-Founder and CEO: The AI Chip Wars & The Plan to Break Nvidia's Dominance

March 21, 2025

How Cerebras Solved the Yield Problem, explained by CTO Sean Lie

February 24, 2025

Inside the AI chip race

Andrew Feldman, co-founder and CEO of Cerebras Systems, provided an in-depth exploration of the groundbreaking technology powering the world’s fastest processor for large language model inference. Discover what’s next for this innovative Nvidia challenger and how it’s shaping the future of AI computing.

February 14, 2025

Julie Choi, Cerebras | theCUBE + NYSE Wired: CMO Leaders Summit

Julie Choi, CMO at Cerebras, talks with John Furrier at the NYSE Wired CMO Leaders Summit in theCUBE Studios in Palo Alto, CA.

February 12, 2025

AI Chip Startup Cerebras Systems Looks To Challenge Nvidia's Dominance

Forbes Assistant Managing Editor Katharine Schwab talks with Cerebras Systems' CEO and cofounder Andrew Feldman about his startup's AI chip, the impact of China's DeepSeek and its implications for the global AI landscape.

December 21, 2024

AI Native 2024 – Cerebras CTO Sean Lie – Session #4

Next Generation AI Enabled by GPU Impossible Performance – Sean Lie (Cerebras)

October 24, 2024

Andy Hock's Talk at AI Infra Summit

Andy Hock from Cerebras Systems presented at the AI Infra Summit, focusing on revolutionizing AI compute through wafer-scale engines. He emphasized the growing demand for AI infrastructure, explaining how traditional chips are insufficient for the massive computational needs of today’s AI models, whose compute requirements have increased by 40,000 times over the last five years. Cerebras Systems has developed the wafer-scale engine, the world’s largest computer chip, designed specifically for AI workloads.

September 19, 2024

Can We Make Generative AI Cheaper? | Natalia Vassilieva & Andy Hock, Cerebras Systems

With AI tools constantly evolving, the potential for innovation seems limitless. But with great potential comes significant costs, and the question of efficiency and scalability becomes crucial. How can you ensure that your AI models are not only pushing boundaries but also delivering results in a cost-effective way? What strategies can help reduce the financial burden of training and deploying models, while still driving meaningful business outcomes?

May 07, 2024

Training the largest LLMs, Cerebras Wafer-Scale Architecture | Keynote 3 | Jean-Philippe Fricker

Experience the pinnacle of AI and machine learning expertise at the Applied Machine Learning Days (AMLD) hosted at EPFL in 2024. With over 450 speakers, 43 tracks, and 28 workshops, these recordings offer insights into the latest research and applications of AI and machine learning. The AMLD channel is your gateway to this premier four-day gathering for the AI and machine learning community, drawing thousands of experts and participants from over 40 countries across industry, academia, and government.
