
In this NeurIPS interview, we speak with Laura Ruis about her research on the ability of language models to interpret language in context. She has designed a simple task to evaluate widely used state-of-the-art language models and found that they struggle to make pragmatic inferences (implicatures). Tune in to learn more about her findings and what they mean for the future of conversational AI.
Laura Ruis
https://www.lauraruis.com/
https://twitter.com/LauraRuis
BLOOM
https://bigscience.huggingface.co/blog/bloom
[Ruis, Khan, Biderman, Hooker, Rocktäschel, Grefenstette] Large language models are not zero-shot communicators
https://arxiv.org/abs/2210.14986
[Zhang et al] OPT: Open Pre-trained Transformer Language Models
https://arxiv.org/pdf/2205.01068.pdf
[Lampinen] Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
https://arxiv.org/pdf/2210.15303.pdf
[Gary Marcus] Horse rides astronaut
https://garymarcus.substack.com/p/horse-rides-astronaut
[Gary Marcus] GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about
https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/
[Bender et al] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
https://dl.acm.org/doi/10.1145/3442188.3445922
[janus] Simulators (Less Wrong)
https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators
By Machine Learning Street Talk (MLST)
