Use Case
Text classification
Web services rely on accurate text classification algorithms for applications ranging from comment moderation to customer service assistants. Profanity and hate speech detection, sentiment analysis for brand monitoring and customer assistance, and support ticket routing are just a few common examples.
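Production systems use trained language models for these tasks, but the basic shape of a classifier like ticket routing can be illustrated with a toy sketch. The categories and keyword lists below are hypothetical examples invented for illustration, not a real routing scheme:

```python
# Toy illustration of text classification for support ticket routing.
# The queues and keyword lists are hypothetical; real systems use
# trained language models rather than keyword matching.

ROUTING_KEYWORDS = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "technical": {"error", "crash", "bug", "timeout"},
    "account": {"password", "login", "username", "profile"},
}

def route_ticket(text: str) -> str:
    """Route a ticket to the queue whose keywords match the most words."""
    words = set(text.lower().split())
    scores = {queue: len(words & kws) for queue, kws in ROUTING_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route_ticket("I was charged twice, please refund my payment"))  # billing
```

A trained model replaces the keyword lookup with a learned scoring function, but the input-to-label contract is the same.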
These services are powered by large AI language models, which are slow to train. With Cerebras’ revolutionary WSE-2, these models can be trained in just hours on a single CS-2 system.
Customer Case Study
Search and Q&A
A team of GSK researchers introduced a BERT model that learns representations based on both DNA sequence and paired epigenetic state inputs, which they named Epigenomic BERT (EBERT).
Training this complex model on a previously prohibitively large dataset was made possible for the first time by the partnership between GSK and Cerebras. The team trained EBERT in about 2.5 days, compared with an estimated 24 days on a 16-node GPU cluster, roughly a tenfold speedup.
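EBERT learns from paired DNA-sequence and epigenetic-state inputs. GSK's exact architecture is not reproduced here; the following is a minimal NumPy sketch of one common way to pair two per-position input streams, summing their embeddings the way BERT sums token and segment embeddings. The vocabulary, state names, and dimensions are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of pairing two input streams per position, loosely in the
# spirit of EBERT's paired DNA-sequence / epigenetic-state inputs.
# Vocabularies, dimensions, and the summing scheme are illustrative
# assumptions, not GSK's actual architecture.

rng = np.random.default_rng(0)

D_MODEL = 8                              # embedding width (hypothetical)
dna_vocab = {"A": 0, "C": 1, "G": 2, "T": 3}
state_vocab = {"open": 0, "closed": 1}   # hypothetical chromatin states

dna_emb = rng.normal(size=(len(dna_vocab), D_MODEL))
state_emb = rng.normal(size=(len(state_vocab), D_MODEL))

def embed_paired(dna_seq, states):
    """Sum per-position DNA and state embeddings (as BERT sums
    token and segment embeddings) to form the model input."""
    dna_ids = np.array([dna_vocab[b] for b in dna_seq])
    state_ids = np.array([state_vocab[s] for s in states])
    return dna_emb[dna_ids] + state_emb[state_ids]

x = embed_paired("ACGT", ["open", "open", "closed", "closed"])
print(x.shape)  # (4, 8)
```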
Use Case
Recommendation
Recommendation engines drive many digital businesses. For these engines to be accurate and fast, the AI models that power them need to be trained on massive text or graph datasets, then served with low latency and high throughput.
The CS-2’s 850,000-core processor with on-wafer interconnect enables high-speed training and inference of large models, delivering better recommendations, faster.
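The serving side of such an engine often reduces to scoring items against a user vector produced by the trained model. Here is a minimal NumPy sketch of that step; the random embeddings are stand-ins for vectors a real model (e.g. matrix factorization) would learn:

```python
import numpy as np

# Minimal sketch of serving recommendations from learned embeddings.
# The random user/item vectors are stand-ins for what a trained
# model would produce; sizes are arbitrary.

rng = np.random.default_rng(1)
N_USERS, N_ITEMS, DIM = 3, 100, 16

user_vecs = rng.normal(size=(N_USERS, DIM))
item_vecs = rng.normal(size=(N_ITEMS, DIM))

def top_k(user_id: int, k: int = 5) -> np.ndarray:
    """Score every item by dot product with the user's vector and
    return the indices of the k highest-scoring items."""
    scores = item_vecs @ user_vecs[user_id]
    return np.argsort(scores)[::-1][:k]

recs = top_k(0)
print(recs.shape)  # (5,)
```

At production scale this dot-product scan is exactly the kind of dense, parallel compute that benefits from high-bandwidth hardware.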