
In this episode, we discuss Cerebras Systems and their Wafer-Scale Engine, currently the fastest LLM inference processor, with roughly 7,000x the memory bandwidth of an Nvidia H100. Together with G42, they're also building the Condor Galaxy, potentially the largest AI supercomputer. Is this all just hype? What are the real-world use cases, and why should average users care?