
In episode 24 of The Gradient Podcast, Daniel Bashir talks to Greg Yang, senior researcher at Microsoft Research. Greg Yang's Tensor Programs framework recently received attention for its role in µTransfer, a paradigm in which hyperparameters tuned on a small proxy model transfer directly to a much larger one, sidestepping expensive tuning at full scale.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Sections:
(00:00) Intro
(01:50) Start in AI / Research
(05:55) Fear of Math in ML
(08:00) Presentation of Research
(17:35) Path to MSR
(21:20) Origin of Tensor Programs
(26:05) Refining TP's Presentation
(39:55) The Sea of Garbage (Initializations) and the Oasis
(47:44) Scaling Up Further
(55:53) On Theory and Practice in Deep Learning
(01:05:28) Outro
Episode Links:
* Greg’s Homepage
* Greg’s Twitter
* µP GitHub
* Visual Intro to Gaussian Processes (Distill)
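
For listeners curious what µTransfer looks like in practice, below is a minimal sketch using the `mup` package from the µP GitHub repo linked above. The model architecture, widths, learning rate, and toy training step are illustrative assumptions, not details from the episode; they only show the shape of the workflow (define base/delta models, call `set_base_shapes`, train with a µP-aware optimizer).

```python
# A minimal, illustrative sketch of the µTransfer workflow with `mup`.
# Widths, learning rate, and the toy data are placeholder assumptions.
import torch
import torch.nn as nn
from mup import MuReadout, MuAdam, set_base_shapes

class MLP(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.hidden = nn.Linear(784, width)
        # MuReadout replaces the final nn.Linear so the output layer
        # scales correctly with width under µP.
        self.readout = MuReadout(width, 10)

    def forward(self, x):
        return self.readout(torch.relu(self.hidden(x)))

# A small "base" model and a "delta" model that differ in width;
# mup compares them to infer which dimensions scale with model size.
base, delta = MLP(width=64), MLP(width=128)

# The target model can be much wider; hyperparameters tuned on a
# small µP model are meant to transfer to it without re-tuning.
model = MLP(width=4096)
set_base_shapes(model, base, delta=delta)

# MuAdam applies µP's width-dependent learning-rate scaling per layer.
optimizer = MuAdam(model.parameters(), lr=1e-3)

# Toy training step on random data, just to show the loop shape.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

The point of the sketch is the division of labor: you sweep the learning rate (and other hyperparameters) on the cheap base-width model, then reuse the winning values on the wide model, since µP keeps the optimum approximately width-invariant.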
