
Most AI breakthroughs are driven by deep learning. However, current models and deployment methods suffer from significant limitations, such as high energy and memory consumption, high costs, and dependence on highly specialized hardware. Hardware advancements have gotten deep learning deployments this far, but for AI to meet its full potential, a software accelerator approach is required.
Dr. Eli David, a pioneering researcher in deep learning and neural networks, has focused his research on developing deep learning technologies that improve the real-world deployment of AI systems, and believes the key lies in software. Bringing his research to fruition, Eli has developed DeepCube, a software-based inference accelerator that can be deployed on top of existing hardware (CPU, GPU, ASIC) in both datacenters and edge devices to drastically improve deep learning speed, efficiency, and memory usage.
For example, some of his results include:
• Increasing the inference speed on a regular CPU to match and surpass that of a GPU, which costs several times more
• Increasing the inference speed on a single GPU to equal the performance of 10 GPUs
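DeepCube's actual techniques are proprietary, but to make the idea of software-based acceleration concrete, here is a minimal sketch of one widely used approach: magnitude-based weight pruning, which shrinks a trained network's memory footprint on unchanged hardware by zeroing out the smallest weights. The function name and the 90% sparsity target below are illustrative assumptions, not DeepCube's method.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that at least
    `sparsity` (a fraction between 0 and 1) of them become zero."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Demo: prune 90% of a random weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
pruned = prune_by_magnitude(w, sparsity=0.9)
kept = np.count_nonzero(pruned)
print(f"kept {kept} of {w.size} weights")
```

A pruned matrix like this can be stored in a sparse format and multiplied with far fewer operations, which is one way a pure-software change can yield the kind of CPU and GPU speedups described above.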