This paper explores a novel framework called LR-Recsys, which enhances black-box recommender systems by integrating Large Language Model (LLM)-based contrastive explanations. Instead of relying solely on past user behavior or explicit product details, LR-Recsys uses LLMs to generate both positive and negative reasons why a user might or might not like a product. These generated explanations, converted into numerical representations, are then fed into a deep neural network, improving the system's ability to predict user preferences more accurately while also providing valuable insights to users, sellers, and the platform. The authors demonstrate empirically and theoretically that these performance gains stem primarily from the LLMs' reasoning capabilities, particularly for more challenging or uncertain recommendation scenarios.
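The pipeline described above can be sketched in miniature: embed a positive and a negative LLM-generated explanation, concatenate them with user and item representations, and score the result with a small neural network. This is an illustrative sketch only; the function names, dimensions, and the hash-based text encoder are stand-ins, not the paper's actual architecture or encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # illustrative embedding size, not from the paper

def embed_explanation(text: str, dim: int = DIM) -> np.ndarray:
    """Stand-in for a real text encoder: hash tokens into a dense vector.
    In LR-Recsys this role is played by a learned embedding of the
    LLM-generated explanation."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def predict_preference(user_vec, item_vec, pos_expl, neg_expl, W1, W2):
    """Concatenate user/item vectors with both contrastive explanation
    embeddings and score with a tiny two-layer network (sigmoid output)."""
    x = np.concatenate([user_vec, item_vec,
                        embed_explanation(pos_expl),
                        embed_explanation(neg_expl)])
    h = np.maximum(0.0, W1 @ x)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))  # preference probability

# Toy usage with random, untrained weights (for shape illustration only).
user_vec = rng.normal(size=DIM)
item_vec = rng.normal(size=DIM)
W1 = rng.normal(size=(16, 4 * DIM)) * 0.1
W2 = rng.normal(size=16) * 0.1

p = predict_preference(
    user_vec, item_vec,
    "user enjoys sci-fi and this film is hard sci-fi",
    "user dislikes long runtimes and this film runs three hours",
    W1, W2)
print(p)  # a probability strictly between 0 and 1
```

In the full framework the explanation embeddings are trained jointly with the recommender, so the network can learn how much weight to give the positive versus the negative reasoning for each prediction.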