In this podcast (http://www.radiofreehpc.com/audio/RF-HPC_Episodes/Episode186/RFHPC186gtc2.mp3), the Radio Free HPC team reviews the highlights of the GPU Technology Conference.

From Rich's perspective, the key HPC announcement centered on the new NVIDIA DGX-2 (https://www.nvidia.com/en-us/data-center/dgx-2/) supercomputer with the NVSwitch (https://www.nvidia.com/en-us/data-center/nvlink/) interconnect:

"The rapid growth in deep learning workloads has driven the need for a faster and more scalable interconnect, as PCIe bandwidth increasingly becomes the bottleneck at the multi-GPU system level. NVLink is a great advance that enables eight GPUs in a single server and accelerates performance beyond PCIe. But taking deep learning performance to the next level will require a GPU fabric that enables more GPUs in a single server, with full-bandwidth connectivity between them. NVIDIA NVSwitch is the first on-node switch architecture to support 16 fully connected GPUs in a single server node and drive simultaneous communication between all eight GPU pairs at an incredible 300 GB/s each. These 16 GPUs can be used as a single large-scale accelerator with 0.5 terabytes of unified memory space and 2 petaFLOPS of deep learning compute power."

For more details on DGX-2, check out our insideHPC interview with NVIDIA's Marc Hamilton (https://insidehpc.com/2018/04/inside-new-nvidia-dgx-2-supercomputer-nvswitch/). For a rough sense of what full peer-to-peer GPU connectivity looks like from the programmer's side, see the short sketch at the end of this post.

Henry Newman and the Amazing Technicolor Dreamcoat

After that, we do our Catch of the Week.

Download the MP3 (http://www.radiofreehpc.com/audio/RF-HPC_Episodes/Episode186/RFHPC186gtc2.mp3) * Subscribe on iTunes (http://bit.ly/WgEZzd) * RSS Feed (http://bit.ly/QXKy3V)
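
To make the "fully connected GPUs with a unified memory space" claim a bit more concrete, here is a minimal CUDA sketch, our own illustration rather than anything from NVIDIA's announcement, that enumerates the GPUs in a node and enables peer-to-peer access between every pair that supports it. On an NVSwitch-based machine such as the DGX-2, every pair should qualify, which is what lets the GPUs behave like one large accelerator.

```cuda
// Sketch only: enable peer-to-peer access between all GPU pairs in a node.
// On an NVSwitch system (e.g. DGX-2), every pair should report peer capability,
// so kernels on one GPU can dereference pointers in another GPU's memory,
// with traffic carried over NVLink/NVSwitch rather than staged through the host.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d GPUs\n", count);

    for (int i = 0; i < count; ++i) {
        cudaSetDevice(i);                       // make GPU i the current device
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            if (canAccess) {
                // Let the current device (i) map GPU j's memory.
                cudaDeviceEnablePeerAccess(j, 0);
                printf("GPU %d -> GPU %d: peer access enabled\n", i, j);
            } else {
                printf("GPU %d -> GPU %d: no peer access\n", i, j);
            }
        }
    }
    return 0;
}
```

Compile with nvcc and run on a multi-GPU node; the same code runs on PCIe-only servers, it simply reports fewer peer-capable pairs and lower bandwidth between them.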