
A friendly, intuitive tour of Hidden Markov Models (HMMs). Using the relatable 'full trash bin means he's home' metaphor, we explore how to infer unseen states from noisy observations, learn the model parameters with Baum–Welch, and decode the most likely state sequence with the Viterbi algorithm. You’ll see how forward–backward smoothing combines evidence from past and future, and how these ideas power real-world AI—from speech recognition to gene finding and beyond.
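To make the Viterbi step concrete, here is a minimal Python sketch of the episode's trash-bin setup: two hidden states (home or away) inferred from whether the bin is full. The transition and emission probabilities below are illustrative assumptions, not numbers from the episode.

```python
import numpy as np

# Hypothetical two-state HMM for the "full trash bin means he's home" metaphor.
# Hidden states: 0 = away, 1 = home. Observations: 0 = bin empty, 1 = bin full.
# All probabilities are made up for illustration.
states = ["away", "home"]
start = np.array([0.5, 0.5])       # initial state distribution
trans = np.array([[0.7, 0.3],      # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.8, 0.2],       # P(observation | state): away rarely fills the bin,
                 [0.3, 0.7]])      # home usually does

def viterbi(obs):
    """Return the most likely hidden-state sequence for a list of observations."""
    T, N = len(obs), len(states)
    logp = np.log(start) + np.log(emit[:, obs[0]])  # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int)              # backpointers for path recovery
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)      # score of each transition i -> j
        back[t] = scores.argmax(axis=0)             # best predecessor for each state j
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):                   # trace backpointers to the start
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# Observing: empty, full, full, empty
print(viterbi([0, 1, 1, 0]))  # ['away', 'home', 'home', 'away']
```

The same dynamic-programming table, run with sums instead of maxima in both directions, gives the forward-backward smoothing the episode describes, where evidence from past and future observations is combined at each time step.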
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
By Mike Breault