
This week Dr. Tim Scarfe, Sayak Paul and Yannic Kilcher speak with Dr. Simon Kornblith from Google Brain (Ph.D. from MIT). Simon is trying to understand how neural nets do what they do. He was the second author on the seminal Google AI SimCLR paper. We also cover "Do Wide and Deep Networks Learn the Same Things?", "What's in a Loss Function for Image Classification?", and "Big Self-Supervised Models are Strong Semi-Supervised Learners". Simon is a former neuroscientist and shares the story of his unique journey into ML.
00:00:00 Show teaser / "short version"
00:18:34 Show intro
00:22:11 Relationship between neuroscience and machine learning
00:29:28 Similarity analysis and evolution of representations in Neural Networks
00:39:55 Expressibility of NNs
00:42:33 What's in a loss function for image classification
00:46:52 Loss function implications for transfer learning
00:50:44 SimCLR paper
01:00:19 Contrasting SimCLR with BYOL
01:01:43 Data augmentation
01:06:35 Universality of image representations
01:09:25 Universality of augmentations
01:23:04 GPT-3
01:25:09 GANs for data augmentation?
01:26:50 Julia language
@skornblith
https://www.linkedin.com/in/simon-kornblith-54b2033a/
https://arxiv.org/abs/2010.15327
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
https://arxiv.org/abs/2010.16402
What's in a Loss Function for Image Classification?
https://arxiv.org/abs/2002.05709
A Simple Framework for Contrastive Learning of Visual Representations
https://arxiv.org/abs/2006.10029
Big Self-Supervised Models are Strong Semi-Supervised Learners
By Machine Learning Street Talk (MLST)
