Conference paper, 2018

Fighting Boredom in Recommender Systems with Linear Reinforcement Learning

Romain Warlop
Alessandro Lazaric
Jérémie Mary

Abstract

A common assumption in recommender systems (RS) is the existence of a best fixed recommendation strategy. Such a strategy may be simple and work at the item level (e.g., in multi-armed bandits it is assumed that one best fixed arm/item exists) or implement a more sophisticated RS (e.g., the objective of A/B testing is to find the best fixed RS and execute it thereafter). We argue that this assumption is rarely verified in practice, as the recommendation process itself may impact the user's preferences. For instance, a user may get bored by a strategy, yet regain interest in it if enough time has passed since it was last used. In this case, a better approach is to alternate different solutions at the right frequency to fully exploit their potential. In this paper, we first cast the problem as a Markov decision process in which the rewards are a linear function of the recent history of actions, and we show that a policy accounting for the long-term influence of the recommendations may outperform both fixed-action and contextual greedy policies. We then introduce an extension of the UCRL algorithm (LINUCRL) to effectively balance exploration and exploitation in an unknown environment, and we derive a regret bound that is independent of the number of states. Finally, we empirically validate the model assumptions and the algorithm in a number of realistic scenarios.
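
To make the reward model concrete, here is a minimal sketch, in Python, of the kind of boredom-aware linear reward the abstract describes. The recency featurization, the parameter values, and the greedy loop are illustrative assumptions rather than the paper's actual model or the LINUCRL algorithm; they only show how a reward that depends linearly on the recent history of actions makes alternating between strategies outperform any single fixed one.

import numpy as np

# Hypothetical recency feature: fraction of the last `window` recommendations
# that used the same action, so over-using a strategy lowers its reward.
def recency_feature(history, action, window=10):
    recent = history[-window:]
    if not recent:
        return 0.0
    return sum(a == action for a in recent) / len(recent)

# Linear reward model: theta[action] maps the feature vector [1, recency]
# to an expected reward (assumed parameters, for illustration only).
def expected_reward(theta, history, action, window=10):
    x = np.array([1.0, recency_feature(history, action, window)])
    return float(theta[action] @ x)

theta = {
    0: np.array([1.0, -0.8]),  # strategy 0: high base reward, strong boredom effect
    1: np.array([0.7, -0.2]),  # strategy 1: lower base reward, mild boredom effect
}

history = []
for t in range(20):
    # Greedy choice w.r.t. the current recency features; LINUCRL instead plans
    # in the induced MDP and adds optimism to balance exploration/exploitation.
    action = max(theta, key=lambda a: expected_reward(theta, history, a))
    history.append(action)
print(history)  # the chosen strategies alternate once strategy 0 becomes "boring"

Even this myopic greedy rule ends up alternating between the two strategies, a behavior that no fixed-action policy can reproduce and that the paper's MDP formulation is designed to capture and plan for.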
Main file: WARLOP-NIPS18.pdf (453.58 KB)
Origin: files produced by the author(s)

Dates and versions

hal-01915468, version 1 (07-11-2018)

Identifiers

Cite

Romain Warlop, Alessandro Lazaric, Jérémie Mary. Fighting Boredom in Recommender Systems with Linear Reinforcement Learning. Neural Information Processing Systems, Dec 2018, Montreal, Canada. ⟨10.5555/3326943.3327105⟩. ⟨hal-01915468⟩
359 views
289 downloads
