Optimize models to run on edge devices (e.g., Jetsons) with 3-20x faster inference and 80% lower memory requirements.
Links mentioned in this episode:
• https://www.linkedin.com/in/pranavnair311/
• https://www.loom.com/share/d8fbb4faef87493c9806610fff6ff86c?sid=10b54831-159b-4157-95b7-8b9f7a5c8d8e
• https://www.linkedin.com/in/viraatdas/
• https://www.ycombinator.com/launches/Muo-exla-run-datacenter-models-on-edge-devices