
This paper introduces LLaSM, a large multi-modal speech-and-language model capable of following spoken instructions. The model demonstrates a more natural way for humans to interact with AI, and the authors also release a dataset and code.
By Igor Melnyk