
We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST
Patreon: https://www.patreon.com/mlst
For Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of machine learning today. He believes they can solve previously intractable problems and hold the key to unlocking abstract reasoning in machines. This discussion explores the promise of GFlowNets and the personal journey that led Prof. Bengio to them.
Panel:
Dr. Tim Scarfe
Dr. Keith Duggar
Dr. Yannic Kilcher
Our special thanks to:
- Alexander Mattick (Zickzack)
References:
Yoshua Bengio @ MILA (https://mila.quebec/en/person/bengio-yoshua/)
GFlowNet Foundations (https://arxiv.org/pdf/2111.09266.pdf)
Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation (https://arxiv.org/pdf/2106.04399.pdf)
Interpolation Consistency Training for Semi-Supervised Learning (https://arxiv.org/pdf/1903.03825.pdf)
Towards Causal Representation Learning (https://arxiv.org/pdf/2102.11107.pdf)
Causal inference using invariant prediction: identification and confidence intervals (https://arxiv.org/pdf/1501.01332.pdf)