In the world of machine learning, there’s been a notable trade-off between accuracy and intelligibility. Either the models are accurate but difficult to make sense of, or easy to understand but prone to error. That’s why Dr. Rich Caruana, Principal Researcher at Microsoft Research, has spent a good part of his career working to make the simple more accurate and the accurate more intelligible.
Today, Dr. Caruana talks about how the rise of deep neural networks has made understanding machine predictions more difficult for humans, and discusses an interesting class of smaller, more interpretable models that may help make the black-box nature of machine learning more transparent.
By researchers across the Microsoft Research community