
Most AI breakthroughs are driven by deep learning. However, current models and deployment methods suffer from significant limitations: high energy and memory consumption, high costs, and dependence on highly specialized hardware. Hardware advances have carried deep learning deployments this far, but for AI to reach its full potential, a software-based acceleration approach is required.
Dr. Eli David, a pioneering researcher in deep learning and neural networks, has focused his research on deep learning technologies that improve the real-world deployment of AI systems, and he believes the key lies in software. Bringing that research to fruition, he has developed DeepCube, a software-based inference accelerator that can be deployed on top of existing hardware (CPU, GPU, ASIC) in both datacenters and edge devices to drastically improve deep learning speed, efficiency, and memory usage.
For example, some of his results include:
• Increasing the inference speed on a regular CPU to match and surpass that of a GPU that costs several times more
• Increasing the inference speed on a GPU to equal the performance of 10 GPUs
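DeepCube's specific techniques are proprietary and not detailed in the episode, but the sketch below illustrates the general class of software-only inference acceleration it represents: compressing an already-trained network in software so it runs faster on the same CPU, with no hardware changes. The example uses PyTorch's post-training dynamic quantization purely as a stand-in technique, and the model itself is a hypothetical placeholder.

# A minimal sketch of software-based inference acceleration, assuming
# PyTorch. This is NOT DeepCube's method; dynamic quantization is used
# here only to illustrate the idea of speeding up inference on existing
# hardware purely in software.
import torch
import torch.nn as nn

# Hypothetical trained model standing in for a real deployment.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Convert Linear layers to int8 weights; activations are quantized on
# the fly at runtime. The result typically runs faster on the same CPU
# and has a smaller memory footprint than the float32 original.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x))

The design point this illustrates is the one made above: the model, not the chip, is treated as the optimization target, so the speedup stacks on top of whatever CPU, GPU, or ASIC is already deployed.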
By Neil C. Hughes
