

We discuss "Accurate KV Cache Quantization with Outlier Tokens Tracing," a deep dive into improving the efficiency of LLM inference. The authors enhance KV cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance.
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
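To make the idea concrete, here is a minimal sketch of KV cache quantization with outlier-token exclusion. This is illustrative only: the paper's actual tracing criterion, bit layout, and threshold are not described in this blurb, so the median-based outlier rule and 4-bit absmax scheme below are assumptions.

```python
# Hedged sketch: per-token uniform quantization of a KV cache, with
# "outlier" tokens (unusually large magnitudes) traced and kept in
# full precision so they do not degrade the shared quantization grid.
# The outlier rule (absmax far above the median absmax) is hypothetical.

def quantize_token(vec, n_bits=4):
    """Uniform absmax quantization of one token's K or V vector."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(abs(x) for x in vec) / qmax or 1.0  # avoid zero scale
    q = [round(x / scale) for x in vec]
    return q, scale

def dequantize_token(q, scale):
    return [x * scale for x in q]

def quantize_kv_with_outliers(tokens, n_bits=4, outlier_ratio=4.0):
    """Quantize each token; tokens whose absmax exceeds outlier_ratio
    times the median absmax are stored unquantized (traced outliers)."""
    absmaxes = sorted(max(abs(x) for x in t) for t in tokens)
    median = absmaxes[len(absmaxes) // 2]
    cache, outliers = [], []
    for i, t in enumerate(tokens):
        if max(abs(x) for x in t) > outlier_ratio * median:
            outliers.append(i)
            cache.append(("fp", list(t)))  # outlier kept in full precision
        else:
            cache.append(("q", quantize_token(t, n_bits)))
    return cache, outliers
```

For example, with three typical tokens and one with very large activations, only the large one is traced as an outlier; the rest compress to 4-bit, which is the efficiency/accuracy trade-off the episode discusses.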
By Arize AI
