Ervin Dervishaj, a PhD student at the University of Copenhagen, discusses his research on disentangled representation learning in recommender systems, finding that while disentanglement strongly correlates with interpretability, it doesn't consistently improve recommendation performance. The conversation explores how disentanglement acts as a regularizer that can enhance user trust and interpretability at the potential cost of some accuracy, and touches on the future of large language models in denoising user interaction data.
By Kyle Polich, rated 4.4 (475 ratings)