


Support us! https://www.patreon.com/mlst
Hattie Zhou, a PhD student at Université de Montréal and Mila, has set out to understand and explain the performance of modern neural networks, believing this to be a key factor in building better, more trusted models. Having previously worked as a data scientist at Uber, a private equity analyst at Radar Capital, and an economic consultant at Cornerstone Research, she recently released a paper with the Google Brain team titled ‘Teaching Algorithmic Reasoning via In-context Learning’. In it, Hattie identifies and examines four key stages for teaching algorithmic reasoning to large language models (LLMs): formulating algorithms as skills, teaching multiple skills simultaneously, teaching how to combine skills, and teaching how to use skills as tools. Using algorithmic prompting, she achieves an order-of-magnitude error reduction on some tasks compared to the best available baselines, demonstrating the viability of algorithmic prompting as an approach for teaching algorithmic reasoning to LLMs and suggesting implications for other tasks that require similar reasoning capabilities.
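As a concrete illustration of the first stage, formulating an algorithm as a skill, here is a minimal Python sketch of what an algorithmic prompt for multi-digit addition might look like. The prompt wording, the single in-context example, and the `build_prompt` helper are illustrative assumptions rather than the paper's actual prompts, which spell out each step in far more careful detail; the shared idea is that every intermediate step (including the carry) is stated explicitly, so the model can imitate the procedure on new inputs.

```python
# Illustrative sketch only: the prompt text below is paraphrased from the style
# described in the paper (explicit, step-by-step algorithm traces given
# in-context); the exact wording of the paper's prompts differs.

ALGORITHMIC_PROMPT = """\
Problem: 128 + 367.
Explanation: We add the digits from right to left, tracking the carry.
Step 1: 8 + 7 = 15. Write down 5, carry 1.
Step 2: 2 + 6 + 1 (carry) = 9. Write down 9, carry 0.
Step 3: 1 + 3 + 0 (carry) = 4. Write down 4, carry 0.
Reading the digits in order gives 495.
Answer: 495.

Problem: {question}.
Explanation:"""


def build_prompt(question: str) -> str:
    """Fill the in-context example template with a new question.

    The completed prompt would be sent to an LLM, which is expected to
    reproduce the step-by-step trace for the new problem before answering.
    """
    return ALGORITHMIC_PROMPT.format(question=question)


if __name__ == "__main__":
    # Print the completed prompt for a new problem; in practice this string
    # would be passed to an LLM for completion.
    print(build_prompt("905 + 88"))
```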
TOC
[00:00:00] Hattie Zhou
[00:19:49] Markus Rabe [Google Brain]
Hattie's Twitter - https://twitter.com/oh_that_hat
Website - http://hattiezhou.com/
Teaching Algorithmic Reasoning via In-context Learning [Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi]
https://arxiv.org/pdf/2211.09066.pdf
Markus Rabe [Google Brain]:
https://twitter.com/markusnrabe
https://research.google/people/106335/
https://www.linkedin.com/in/markusnrabe
Autoformalization with Large Language Models [Albert Jiang, Charles Edgar Staats, Christian Szegedy, Markus Rabe, Mateja Jamnik, Wenda Li, and Yuhuai Tony Wu]
https://research.google/pubs/pub51691/
Discord: https://discord.gg/aNPkGUQtc5
YT: https://youtu.be/80i6D2TJdQ4
By Machine Learning Street Talk (MLST)
