
Building on the discussion of individual decision trees in the prior episode, Shea and Anders shift to one of today’s most popular ensemble models, the Random Forest. At first glance, the algorithm may seem like a brute-force approach that simply runs hundreds or thousands of decision trees, but it leverages the concept of “bagging” to avoid overfitting and to learn as much as possible from the entire data set, not just a few key features. We close by covering the strengths and weaknesses of this model and providing some real-life examples.
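To make the bagging idea concrete, here is a minimal sketch (not from the episode; the dataset and parameter values are illustrative) using scikit-learn. Each tree is fit on a bootstrap sample of the training data and considers only a random subset of features at each split, and the forest predicts by majority vote across the trees:

```python
# A minimal sketch of bagging with decision trees, using scikit-learn.
# The synthetic dataset and parameter choices here are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hand-rolled bagging: each tree sees a bootstrap sample (drawn with
# replacement) and only a random subset of features at each split.
rng = np.random.default_rng(0)
trees = []
for _ in range(100):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier(max_features="sqrt", random_state=0)
    trees.append(tree.fit(X_train[idx], y_train[idx]))

# The ensemble prediction is a majority vote across all trees.
votes = np.stack([t.predict(X_test) for t in trees])
bagged_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("hand-rolled bagging accuracy:", accuracy_score(y_test, bagged_pred))

# The same idea, packaged: scikit-learn's RandomForestClassifier.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("RandomForestClassifier accuracy:", forest.score(X_test, y_test))
```

Because each tree trains on a different resample and a restricted feature view, the individual trees are decorrelated; averaging their votes then reduces variance, which is why the forest tends to overfit less than any single deep tree.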