AI Post Transformers

Teraio: Cost-Efficient LLM Training via Lifetime-Aware Tensor Offloading



The research introduces Teraio, a novel framework designed to enhance the cost-efficiency and performance of large language model (LLM) training. This framework addresses the significant memory demands of LLMs by intelligently offloading inactive tensors from expensive GPU memory to more affordable PCIe-based solid-state drives (SSDs) and host memory. Teraio employs a lifetime-aware tensor offloading mechanism that profiles tensor activity patterns to generate optimized offloading and prefetching plans, thereby maximizing the utilization of both SSD bandwidth and GPU memory. By leveraging GPUDirect Storage, Teraio enables direct data transfer between GPUs and SSDs, bypassing CPU bottlenecks and improving overall training throughput. Experimental results demonstrate that Teraio significantly outperforms existing offloading solutions like ZeRO-Offload and ZeRO-Infinity, achieving faster training speeds and superior cost efficiency for various LLMs.
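The planning step described above can be illustrated with a minimal sketch. This is not Teraio's actual code; the `Tensor` fields, the scheduling heuristic, and all parameter names are assumptions made for illustration. The idea is that a tensor is a good offload candidate only when its idle window (between its last use and its next use) is long enough to cover a round trip to SSD at the measured bandwidth, and its prefetch must be issued early enough that the read completes before the tensor is needed again.

```python
# Hypothetical sketch of lifetime-aware offload planning; not Teraio's API.
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str
    size_mb: float
    last_use: int   # step after which the tensor becomes inactive
    next_use: int   # step at which the tensor is needed again

def plan_offloads(tensors, step_ms, ssd_bw_mb_per_ms):
    """Return (offload, prefetch) schedules as lists of (step, tensor name)."""
    offloads, prefetches = [], []
    for t in tensors:
        transfer_ms = t.size_mb / ssd_bw_mb_per_ms
        idle_ms = (t.next_use - t.last_use) * step_ms
        # Offload only if the idle window covers writing out and reading back.
        if idle_ms > 2 * transfer_ms:
            offloads.append((t.last_use, t.name))
            # Issue the prefetch early enough that the read finishes in time.
            prefetch_step = t.next_use - max(1, int(transfer_ms / step_ms) + 1)
            prefetches.append((prefetch_step, t.name))
    return offloads, prefetches
```

For example, a 512 MB activation last used at step 3 and next needed at step 40, with 10 ms steps and roughly 12 MB/ms of SSD bandwidth, would be offloaded at step 3 and prefetched at step 35, hiding the transfer behind other compute.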

By mcgrof