Cerebras Inference Research Grant

We believe AI is the most transformative technology of our generation.
Collaborate with us to push the boundaries of AI research.


Our Mission

We aim to accelerate AI by making it faster, easier to use, and more energy efficient, so that it is accessible around the world.

Cerebras Inference, powered by the third-generation Wafer Scale Engine, is the fastest AI inference solution in the world, delivering over 2,100 tokens per second for Llama 3.1 70B, 16x faster than the fastest GPU-based solution.

We invite university faculty and researchers to respond to this Request for Proposals (RFP) to advance the field of Generative AI and contribute to the broader scientific and technological communities.

Grant Details

Selected Principal Investigators (PIs) may receive the following:

    • Up to $50,000 USD in Cerebras Inference service credits
    • Cerebras tutorials and hands-on sessions with Cerebras engineers and researchers

The final grant amount and details will be determined by Cerebras.

Areas of Interest

Areas of research on inference-time techniques may include:

    • Prompting techniques such as chain-of-thought, ReAct, and programmable prompt customization.
    • In-context learning (ICL).
    • Inference decoding algorithms such as sampling, chain-of-thought decoding, and contrastive decoding.
    • Meta-generation algorithms that incorporate the language model within larger generation programs, such as best-of-N, majority voting, self-consistency, and tool calling (see the sketch after this list).
    • Leveraging additional knowledge bases through a combination of iterative multi-step retrieval, reasoning, and generation.
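
As one concrete illustration of the meta-generation direction, below is a minimal sketch of self-consistency via majority voting over sampled chain-of-thought completions. It assumes an OpenAI-compatible chat completions endpoint; the base URL, API key, model name, and answer format are illustrative placeholders rather than confirmed Cerebras Inference values.

```python
from collections import Counter
from openai import OpenAI

# Placeholder configuration: any OpenAI-compatible chat completions endpoint.
# The base URL, API key, and model name below are illustrative, not confirmed values.
client = OpenAI(base_url="https://api.example-inference.ai/v1", api_key="YOUR_KEY")
MODEL = "llama-3.1-70b"

def self_consistency(question: str, n_samples: int = 5) -> str:
    """Sample several chain-of-thought completions and majority-vote the final answer."""
    answers = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model=MODEL,
            temperature=0.7,  # nonzero temperature yields diverse reasoning paths
            messages=[
                {"role": "system",
                 "content": "Reason step by step, then finish with 'Answer: <final answer>'."},
                {"role": "user", "content": question},
            ],
        )
        text = resp.choices[0].message.content
        # Keep only the text after the last 'Answer:' marker for voting.
        answers.append(text.rsplit("Answer:", 1)[-1].strip())
    # Majority vote across the sampled answers (self-consistency).
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("A train travels 60 km in 45 minutes. What is its average speed in km/h?"))
```

Sampling at a nonzero temperature produces diverse reasoning paths, and the vote over final answers is what distinguishes self-consistency from a single greedy decode.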

Areas of research on content generation may include:

    • Planning content structure, style, analogies, and related elements.
    • Iterative writing, reflection, and refinement (sketched after this list).
    • Editing documents based on user guidance to collaboratively produce content.
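
For the iterative writing direction, the sketch below shows a simple draft-critique-revise loop. As above, it assumes an OpenAI-compatible endpoint, and the base URL, API key, and model name are placeholders.

```python
from openai import OpenAI

# Placeholder configuration, as in the previous sketch.
client = OpenAI(base_url="https://api.example-inference.ai/v1", api_key="YOUR_KEY")
MODEL = "llama-3.1-70b"

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def write_with_reflection(task: str, rounds: int = 2) -> str:
    """Draft, critique, and revise in a loop (iterative writing, reflection, refinement)."""
    draft = chat(f"Write a first draft for the following task:\n{task}")
    for _ in range(rounds):
        critique = chat(f"Critique this draft for structure, style, and clarity:\n\n{draft}")
        draft = chat(
            "Revise the draft below to address the critique.\n\n"
            f"Draft:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft

print(write_with_reflection("A 200-word overview of wafer-scale AI inference for a general audience."))
```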

Areas of research on evaluation and responsible AI may include:

    • Development of robust benchmarks and metrics to assess model quality, reliability, safety, and bias.
    • Novel approaches to operating generative models responsibly, focusing on measuring, preventing, and mitigating hallucinations, biases, and other harmful or undesirable outputs.

Areas of research on synthetic data generation may include:

    • Novel methods to generate large-scale synthetic datasets with precisely defined properties using generative models.
    • Filtering techniques to ensure the quality and diversity of generated samples (a generate-and-filter sketch follows this list).
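
The sketch below illustrates a basic generate-and-filter loop for synthetic data. It assumes an OpenAI-compatible endpoint; the base URL, API key, model name, and JSON prompt format are illustrative placeholders, and a real pipeline would add stronger quality and diversity filters.

```python
import json
from openai import OpenAI

# Placeholder configuration, as in the sketches above.
client = OpenAI(base_url="https://api.example-inference.ai/v1", api_key="YOUR_KEY")
MODEL = "llama-3.1-70b"

def generate_samples(topic: str, n: int) -> list:
    """Generate candidate question/answer pairs about a topic, requested as JSON."""
    samples = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=MODEL,
            temperature=1.0,  # higher temperature encourages diversity across samples
            messages=[{
                "role": "user",
                "content": (
                    f"Write one question and answer about {topic}. "
                    'Respond with only a JSON object: {"question": "...", "answer": "..."}'
                ),
            }],
        )
        try:
            samples.append(json.loads(resp.choices[0].message.content))
        except json.JSONDecodeError:
            continue  # drop malformed generations
    return samples

def filter_samples(samples: list) -> list:
    """Keep well-formed, non-duplicate samples; real pipelines would add model-based quality scoring."""
    seen, kept = set(), []
    for s in samples:
        if not isinstance(s, dict):
            continue
        question = str(s.get("question", "")).strip().lower()
        if question and s.get("answer") and question not in seen:
            seen.add(question)
            kept.append(s)
    return kept

data = filter_samples(generate_samples("wafer-scale computing", n=20))
print(f"kept {len(data)} of 20 generated samples")
```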
