Today we’re joined by Murali Akula, a Sr. Director of Software Engineering at Qualcomm. In our conversation with Murali, we explore his role at Qualcomm, where he leads the corporate research team focused on the development and deployment of AI onto Snapdragon chips, their unique definition of “full stack,” and how that philosophy permeates every step of the software development process. We explore the complexities that are unique to doing machine learning on resource-constrained devices, some of the techniques being applied to get complex models working on mobile devices, and the process for taking these models from research into real-world applications. We also discuss a few more tools and recent developments, including DONNA for neural architecture search, X-Distill, a method for improving self-supervised monocular depth estimation, and the AI Model Efficiency Toolkit, a library that provides advanced quantization and compression techniques for trained neural network models.
The complete show notes for this episode can be found at twimlai.com/go/563