
Building on the discussion of individual decision trees in the prior episode, Shea and Anders shift to one of today’s most popular ensemble models, the Random Forest. At first glance, the algorithm may seem like a brute-force approach that simply runs hundreds or thousands of decision trees, but it leverages the concept of “bagging” to avoid overfitting and to learn as much as possible from the entire data set, not just a few key features. We close by covering the strengths and weaknesses of this model and providing some real-life examples.
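For listeners who want to see the idea in practice, here is a minimal sketch assuming scikit-learn; the dataset and hyperparameter values are illustrative and not from the episode. It shows how a Random Forest trains each tree on a bootstrap sample of the rows (bagging) and a random subset of features per split:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each tree sees a bootstrap sample of the rows (bagging) and considers
# only a random subset of features at each split, which decorrelates the
# trees and curbs overfitting.
forest = RandomForestClassifier(
    n_estimators=500,      # number of trees in the ensemble
    max_features="sqrt",   # features considered at each split
    bootstrap=True,        # sample rows with replacement (bagging)
    oob_score=True,        # score on rows left out of each bootstrap
    random_state=42,
)
forest.fit(X_train, y_train)

print(f"Out-of-bag accuracy: {forest.oob_score_:.3f}")
print(f"Test accuracy: {accuracy_score(y_test, forest.predict(X_test)):.3f}")
```

The out-of-bag score is a useful side effect of bagging: because each tree skips roughly a third of the rows, those held-out rows give a built-in validation estimate without a separate split.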