
Preprocessing pipelines in deep learning aim to provide sufficient data throughput to keep the training processes busy. Maximizing resource utilization is becoming more challenging as the throughput of training processes increases with hardware innovations (e.g., faster GPUs, TPUs, and interconnects) and advanced parallelization techniques that yield better scalability. At the same time, the amount of training data needed to train increasingly complex models is growing. As a consequence, data preprocessing and provisioning are becoming a severe bottleneck in end-to-end deep learning pipelines.
In this interview, Alex talks about his in-depth analysis of data preprocessing pipelines from four different machine learning domains. He also discusses a new perspective on efficiently preparing datasets for end-to-end deep learning pipelines and extracts the individual trade-offs involved in optimizing throughput, preprocessing time, and storage consumption. Alex and his collaborators have developed an open-source profiling library that can automatically decide on a suitable preprocessing strategy to maximize throughput. By applying their insights to real-world use cases, throughput can be increased by 3x to 13x compared to an untuned system while keeping the pipeline functionally identical. These findings show the enormous potential of data pipeline tuning.
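As a rough illustration of the kind of tuning discussed in the episode (this is a minimal sketch, not the PRESTO library or the authors' exact pipelines), the snippet below shows a typical tf.data preprocessing pipeline with the knobs that trade off throughput, preprocessing time, and storage: parallel decoding, caching, and prefetching. File paths, image size, and batch size are hypothetical.

```python
# Illustrative sketch of a tunable tf.data preprocessing pipeline.
# Paths and parameters are placeholders, not from the paper.
import tensorflow as tf

def decode_and_preprocess(path):
    # Per-sample preprocessing: read, decode, resize, normalize.
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [224, 224])
    return tf.cast(image, tf.float32) / 255.0

files = tf.data.Dataset.list_files("data/train/*.jpg")  # hypothetical path

dataset = (
    files
    # Parallelize preprocessing across CPU cores; AUTOTUNE lets the
    # runtime choose the degree of parallelism.
    .map(decode_and_preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    # Optionally cache decoded samples: trades memory/storage for less
    # repeated preprocessing work in later epochs.
    .cache()
    .shuffle(buffer_size=1024)
    .batch(64)
    # Overlap preprocessing with training so the accelerator is not starved.
    .prefetch(tf.data.AUTOTUNE)
)
```

Whether caching, heavier parallelism, or storing partially preprocessed data pays off depends on the pipeline and hardware, which is exactly the kind of decision a profiling tool like PRESTO is meant to automate.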
0:36 - Can you explain to our listeners what a deep learning pipeline is?
1:33 - In this pipeline, how does data pre-processing become a bottleneck?
5:40 - In the paper you analyse several different domains; can you go into more detail about the domains and pipelines?
6:49 - What are the key insights from your analysis?
8:28 - What are the other insights?
13:23 - Your paper introduces PRESTO, the open-source profiling library; can you tell us more about that?
15:56 - How does this compare to other tools in the space?
18:46 - Who will find PRESTO useful?
20:13 - What is the most interesting, unexpected, or challenging lesson you encountered whilst working on this topic?
22:10 - What do you have planned for future research?
Hosted on Acast. See acast.com/privacy for more information.