This paper introduces LLaSM, a large multi-modal speech-language model capable of following speech-and-language instructions, demonstrating a more natural way for humans to interact with AI. A dataset and code are also released.
By Igor Melnyk