Cerebras Systems today announced what it said is record-breaking performance for DeepSeek-R1-Distill-Llama-70B inference, running the 70B-parameter AI model on its wafer-scale processor, delivering speeds 57x faster than GPU solutions and challenging Nvidia's AI chip dominance.
The DeepSeek-R1-Distill-Llama-70B model is available immediately through Cerebras Inference, with API access offered to select customers through a developer preview program.
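For readers in the developer preview, access of this kind is typically exercised through an OpenAI-compatible client. The sketch below assumes such an endpoint; the base URL, model identifier, and credential variable are illustrative placeholders, not confirmed details of the program.

```python
# Minimal sketch of querying the model via an OpenAI-compatible client.
# Base URL, model name, and env var are assumptions for illustration only;
# actual developer-preview credentials and identifiers come from Cerebras.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",        # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],       # hypothetical env var
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",        # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize wafer-scale inference in one sentence."}
    ],
)

print(response.choices[0].message.content)
```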