PyTorch Developer Podcast

Half precision



In this episode I talk about the reduced-precision floating-point formats float16 (a.k.a. half precision) and bfloat16. I'll discuss what floating-point numbers are, how these two formats differ, and some of the practical considerations that arise when numeric code in PyTorch also needs to work in reduced precision. Did you know that we do all CUDA computations in float32, even if the source tensors are stored as float16? Now you know!
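
To make the tradeoff between the two formats concrete, here is a small sketch using torch.finfo. Both formats are 16 bits wide, but float16 spends more of them on the mantissa (finer precision, narrow range), while bfloat16 spends them on the exponent (coarser precision, roughly float32's dynamic range). The values in the comments are approximate.

    import torch

    # Both 16-bit formats split their bits differently:
    #   float16 (IEEE binary16): 1 sign, 5 exponent, 10 mantissa bits
    #   bfloat16:                1 sign, 8 exponent,  7 mantissa bits
    for dtype in (torch.float16, torch.bfloat16, torch.float32):
        info = torch.finfo(dtype)
        print(f"{str(dtype):>15}  bits={info.bits:2d}  "
              f"eps={info.eps:.3e}  max={info.max:.3e}  tiny={info.tiny:.3e}")

    # Approximate output:
    #    torch.float16  bits=16  eps=9.766e-04  max=6.550e+04  tiny=6.104e-05
    #   torch.bfloat16  bits=16  eps=7.812e-03  max=3.390e+38  tiny=1.175e-38
    #    torch.float32  bits=32  eps=1.192e-07  max=3.403e+38  tiny=1.175e-38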

Further reading.

  • The Wikipedia article on IEEE floating point is pretty great: https://en.wikipedia.org/wiki/IEEE_754
  • How bfloat16 works out when doing training: https://arxiv.org/abs/1905.12322
  • Definition of acc_type in PyTorch (see the sketch below): https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/AccumulateType.h
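
To make the last bullet concrete, here is a toy sketch (not the actual kernel code) of why reductions accumulate in a wider acc_type: a running sum kept in float16 stalls once the partial sum dwarfs the addends, while accumulating in float32 and casting down only at the end stays close to the true value.

    import torch

    # 4096 copies of 0.01 in half precision; the true sum is about 40.97.
    x = torch.full((4096,), 0.01, dtype=torch.float16)

    # Naive running sum kept in float16: once the partial sum passes 32, the
    # spacing between adjacent float16 values is 0.03125, so adding 0.01
    # rounds back to the same value and the sum stalls near 32.
    acc_fp16 = torch.zeros((), dtype=torch.float16)
    for v in x:
        acc_fp16 = acc_fp16 + v

    # The same reduction accumulated in float32, cast down only at the end.
    acc_fp32 = x.to(torch.float32).sum().to(torch.float16)

    print("float16 accumulator:", acc_fp16.item())  # stalls around 32
    print("float32 accumulator:", acc_fp32.item())  # about 40.97
    # torch.sum on the float16 tensor should match the float32 accumulator,
    # since PyTorch reductions accumulate in acc_type (float32 here).
    print("torch.sum(float16): ", x.sum().item())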

PyTorch Developer Podcast, by Edward Yang, Team PyTorch

4.8 (47 ratings)

