
Prompt Cache is a method for accelerating inference in large language models by reusing attention states of frequently occurring text segments. It significantly reduces latency without compromising output accuracy or requiring modifications to the model parameters.
https://arxiv.org/abs/2311.04934
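For listeners who want to see the core idea in code, here is a minimal sketch of prefix-level reuse of key/value attention states, using Hugging Face Transformers with GPT-2 as a stand-in model (both are assumptions for illustration, not the paper's setup). Prompt Cache itself goes further, reusing non-prefix segments at flexible positions via a prompt schema, which this sketch does not reproduce.

```python
# Sketch: precompute the attention (KV) states of a frequently reused
# prompt prefix once, then serve new requests by encoding only the suffix.
# GPT-2 is a stand-in; the paper evaluates larger LLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# 1) Precompute key/value states for the shared prefix.
prefix = "You are a helpful assistant. Answer concisely.\n"
prefix_ids = tok(prefix, return_tensors="pt").input_ids
with torch.no_grad():
    prefix_out = model(prefix_ids, use_cache=True)
cached_kv = prefix_out.past_key_values  # reusable across requests

# 2) Serve a request: feed only the suffix, reusing the cached states,
#    so the prefix is never re-encoded.
suffix = "Q: What does a KV cache store?\nA:"
suffix_ids = tok(suffix, return_tensors="pt").input_ids
with torch.no_grad():
    out = model(suffix_ids, past_key_values=cached_kv, use_cache=True)

next_id = out.logits[:, -1, :].argmax(dim=-1)
print(tok.decode(next_id.tolist()))
```

Because the cached states are computed once and shared across every request that starts with the same prefix, the per-request prefill cost shrinks to the suffix alone, which is where the latency savings come from.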
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
