We discuss "Accurate KV Cache Quantization with Outlier Tokens Tracing," a deep dive into improving the efficiency of LLM inference. The authors enhance KV Cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance.
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
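The core idea described above can be sketched in a few lines: quantize each cached token's key/value vector to int8 with a per-token scale, but trace the highest-magnitude "outlier" tokens and keep them in full precision so they don't blow up the quantization error for everyone else. This is a minimal illustration, not the paper's exact algorithm; the magnitude-based outlier criterion and the `num_outliers` parameter are assumptions for the sketch.

```python
import numpy as np

def quantize_kv_with_outliers(kv, num_outliers=2):
    """Per-token int8 quantization of a KV cache slice, excluding outliers.

    kv: (seq_len, head_dim) float array.
    Outlier selection by max-abs magnitude is an illustrative assumption.
    """
    seq_len, _ = kv.shape
    # Flag the tokens with the largest max-|value| as outliers.
    magnitudes = np.abs(kv).max(axis=1)
    outlier_idx = np.argsort(magnitudes)[-num_outliers:]
    is_outlier = np.zeros(seq_len, dtype=bool)
    is_outlier[outlier_idx] = True

    # One scale per token, chosen so the token's range maps onto [-127, 127].
    scales = magnitudes / 127.0 + 1e-8

    # Quantize only the non-outlier tokens; outlier rows stay zero in q.
    q = np.zeros(kv.shape, dtype=np.int8)
    keep = ~is_outlier
    q[keep] = np.round(kv[keep] / scales[keep][:, None]).astype(np.int8)

    # Store the outlier tokens verbatim in full precision.
    outliers = kv[is_outlier].copy()
    return q, scales, is_outlier, outliers

def dequantize_kv(q, scales, is_outlier, outliers):
    kv = q.astype(np.float32) * scales[:, None]
    kv[is_outlier] = outliers  # restore outlier tokens exactly
    return kv
```

Because the outlier tokens are stored losslessly, the per-token scales for the remaining tokens stay small, which is the balance between memory savings and accuracy the episode discusses.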
By Arize AI