In this NeurIPS interview, we speak with Laura Ruis about her research on the ability of language models to interpret language in context. She designed a simple task to evaluate widely used state-of-the-art language models and found that they struggle to make pragmatic inferences (implicatures). Tune in to learn more about her findings and what they mean for the future of conversational AI.
Laura Ruis
https://www.lauraruis.com/
https://twitter.com/LauraRuis
BLOOM
https://bigscience.huggingface.co/blog/bloom
Large language models are not zero-shot communicators [Laura Ruis, Akbir Khan, Stella Biderman, Sara Hooker, Tim Rocktäschel, Edward Grefenstette]
https://arxiv.org/abs/2210.14986
[Zhang et al] OPT: Open Pre-trained Transformer Language Models
https://arxiv.org/pdf/2205.01068.pdf
[Lampinen] Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
https://arxiv.org/pdf/2210.15303.pdf
[Gary Marcus] Horse rides astronaut
https://garymarcus.substack.com/p/horse-rides-astronaut
[Gary Marcus] GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about
https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/
[Bender et al] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
https://dl.acm.org/doi/10.1145/3442188.3445922
[janus] Simulators (Less Wrong)
https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators