DecodeFM

Cerebras



In this episode, we discuss Cerebras Systems and their Wafer-Scale Engine, currently the fastest LLM inference processor, with 7,000x more memory bandwidth than the NVIDIA H100. Together with G42, they're also developing the Condor Galaxy, potentially the largest AI supercomputer. Is this all just hype? What are the real-world use cases, and why should average users care?


DecodeFM, by Particle Future