
Not every microcontroller can handle artificial intelligence and machine learning (AI/ML) chores. Simplifying the models is one way to squeeze algorithms into a more compact embedded compute engine. Another is to pair the microcontroller with an AI accelerator such as Femtosense's SPU-001 Sparse Processing Unit (SPU) and take advantage of sparsity in AI/ML models.
In this episode, Sam Fok, CEO at Femtosense, talks about AI/ML on the edge, the company's dual-sparsity design, and how the small, low-power SPU-001 can augment a host processor.
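The idea behind exploiting sparsity can be sketched in a few lines. This is a minimal, hypothetical Python illustration of weight sparsity, not Femtosense's actual SPU design: by storing only the nonzero weights as (index, value) pairs, a dot product can skip the zero multiplies entirely, saving compute and memory traffic.

```python
def dense_dot(weights, activations):
    # Dense compute: every weight is multiplied, even the zeros.
    return sum(w * a for w, a in zip(weights, activations))

def sparse_dot(nonzeros, activations):
    # Sparse compute: only stored (index, value) pairs are touched.
    return sum(w * activations[i] for i, w in nonzeros)

weights = [0.0, 0.5, 0.0, 0.0, -1.0, 0.0, 0.0, 2.0]  # mostly zeros
activations = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]

# Keep only the nonzero weights with their positions.
nonzeros = [(i, w) for i, w in enumerate(weights) if w != 0.0]

# Both paths give the same result, but the sparse path does
# 3 multiplies instead of 8.
assert dense_dot(weights, activations) == sparse_dot(nonzeros, activations)
```

In real sparse neural networks the same principle applies at scale: models pruned to 80-90% zero weights need only a fraction of the multiply-accumulate operations, which is what makes them practical on small, low-power edge hardware.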