
Large language models can serve as weak learners in a boosting algorithm for tabular data classification. Properly sampled, text-encoded data records can be summarized into a prompt that acts as the weak hypothesis, and boosting these summaries can outperform traditional tree-based boosting.
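As a rough illustration of what "LLM as weak learner" can look like, here is a minimal Python sketch. The helpers `to_text`, `llm_summarize`, and `llm_classify` are hypothetical placeholders (not from the paper or any particular library), and the loop is a standard AdaBoost-style reweighting scheme built around LLM-generated summaries, not the authors' exact procedure.

```python
# Minimal sketch: boosting with an LLM-generated summary as the weak learner.
# Assumed (hypothetical) helpers:
#   to_text(record)            -> natural-language description of a tabular row
#   llm_summarize(examples)    -> summary prompt built from (text, label) pairs
#   llm_classify(summary, txt) -> predicted label in {-1.0, +1.0}
import numpy as np

def sample_by_weight(texts, labels, weights, k, rng):
    """Draw k examples with probability proportional to the boosting weights."""
    idx = rng.choice(len(texts), size=k, replace=False, p=weights)
    return [(texts[i], labels[i]) for i in idx]

def boost_with_llm(records, labels, rounds=10, k=16, seed=0):
    rng = np.random.default_rng(seed)
    texts = [to_text(r) for r in records]      # text-encode each tabular row
    y = np.asarray(labels, dtype=float)        # labels assumed to be in {-1, +1}
    w = np.full(len(texts), 1.0 / len(texts))  # uniform initial weights
    hypotheses = []                            # list of (alpha, summary)

    for _ in range(rounds):
        # Weak learner: summarize a weighted sample; the summary *is* the model.
        summary = llm_summarize(sample_by_weight(texts, y, w, k, rng))
        preds = np.array([llm_classify(summary, t) for t in texts], dtype=float)

        err = np.sum(w * (preds != y))
        if err >= 0.5:                         # no better than chance: discard round
            continue
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        hypotheses.append((alpha, summary))

        # Reweight: misclassified points gain weight, as in AdaBoost.
        w *= np.exp(-alpha * y * preds)
        w /= w.sum()

    def predict(record):
        t = to_text(record)
        score = sum(a * llm_classify(s, t) for a, s in hypotheses)
        return 1.0 if score >= 0 else -1.0

    return predict
```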
https://arxiv.org/abs/2306.14101
YouTube: https://www.youtube.com/@ArxivPapers
PODCASTS:
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk