
In episode 19 of The Gradient Podcast, we talk to Rishi Bommasani, a Ph.D. student at Stanford focused on foundation models.
Rishi is a second-year Ph.D. student in the CS Department at Stanford, where he is advised by Percy Liang and Dan Jurafsky. His research focuses on understanding AI systems and their social impact, as well as using NLP to further scientific inquiry. Over the past year, he helped build and organize the Stanford Center for Research on Foundation Models (CRFM).
Sections:
(00:00:00) Intro
(00:01:05) How did you get into AI?
(00:09:55) Towards Understanding Position Embeddings
(00:14:23) Long-Distance Dependencies don’t have to be Long
(00:18:55) Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
(00:30:25) Master’s Thesis
(00:34:05) Start of PhD and work on foundation models
(00:42:14) Why were people interested in foundation models
(00:46:45) Formation of CRFM
(00:51:25) Writing report on foundation models
(00:56:33) Challenges in writing report
(01:05:45) Response to reception
(01:15:35) Goals of CRFM
(01:25:43) Current research focus
(01:30:35) Interests outside of research
(01:33:10) Outro
Papers discussed:
* Towards Understanding Position Embeddings
* Long-Distance Dependencies don’t have to be Long: Simplifying through Provably (Approximately) Optimal Permutations
* Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings
* Generalized Optimal Linear Orders
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models
By Daniel Bashir
