Arxiv Papers

FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs



The paper proposes an efficient weight-only quantization method for large language models (LLMs) that reduces memory consumption and accelerates inference. The method takes a heuristic, data-free approach: it uses only the weights of the pre-trained model and requires no additional fine-tuning. By applying fine-grained quantization to the weights, the approach addresses the accuracy challenges of LLM quantization and achieves higher throughput on the same number of GPUs with minimal accuracy loss.
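As a rough illustration of the general idea, the sketch below shows group-wise (fine-grained) weight-only quantization with per-group min-max scaling in NumPy. All function names, the bit width, and the group size are illustrative assumptions, not the paper's actual algorithm or code.

```python
# Minimal sketch of group-wise weight-only quantization (illustrative only;
# names and defaults are assumptions, not taken from the FineQuant paper).
import numpy as np

def quantize_groupwise(w: np.ndarray, bits: int = 4, group_size: int = 64):
    """Quantize each row of `w` in groups of `group_size` columns.

    Returns integer codes plus per-group scale and zero-point so the
    original weights can be approximately reconstructed at inference time.
    """
    qmax = (1 << bits) - 1                      # e.g. 15 for 4-bit codes
    rows, cols = w.shape
    assert cols % group_size == 0, "pad columns to a multiple of group_size"
    groups = w.reshape(rows, cols // group_size, group_size)

    w_min = groups.min(axis=-1, keepdims=True)
    w_max = groups.max(axis=-1, keepdims=True)
    scale = (w_max - w_min) / qmax
    scale = np.where(scale == 0, 1e-8, scale)   # avoid divide-by-zero
    zero = w_min

    q = np.clip(np.round((groups - zero) / scale), 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize_groupwise(q, scale, zero, original_shape):
    """Reconstruct float weights from the codes before (or during) the matmul."""
    return (q.astype(np.float32) * scale + zero).reshape(original_shape)

# Usage: quantize a toy weight matrix and check the reconstruction error.
w = np.random.randn(8, 256).astype(np.float32)
q, s, z = quantize_groupwise(w, bits=4, group_size=64)
w_hat = dequantize_groupwise(q, s, z, w.shape)
print("max abs error:", np.abs(w - w_hat).max())
```

Smaller groups mean more scale/zero-point metadata but tighter ranges per group, which is the basic memory-versus-accuracy trade-off that fine-grained quantization exploits.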

https://arxiv.org/abs/2308.09723
YouTube: https://www.youtube.com/@ArxivPapers
PODCASTS:
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers


Arxiv Papers, by Igor Melnyk

Rating: 5.0 (3 ratings)

