
This study examines whether large language models (LLMs) can predict Transition Relevance Places (TRPs) in spoken conversation. TRPs are points in a speaker's utterance that signal appropriate opportunities for a listener to respond. Although LLMs have shown some promise at predicting turn-final TRPs, the study finds that they struggle with within-turn TRPs, points where a listener could take the turn but chooses not to. To evaluate this, the researchers built a novel dataset of participant-labeled within-turn TRPs. Their findings indicate that current LLMs are limited in modeling unscripted spoken interaction and underscore the need for further research in this area.
https://arxiv.org/pdf/2410.16044