%0 Conference Proceedings
%T Risk-Aversion in Multi-armed Bandits
%+ Sequential Learning (SEQUEL)
%A Sani, Amir
%A Lazaric, Alessandro
%A Munos, Rémi
%< peer-reviewed
%B NIPS - Twenty-Sixth Annual Conference on Neural Information Processing Systems
%C Lake Tahoe, United States
%8 2012-12
%D 2012
%Z Statistics [stat]/Machine Learning [stat.ML]
%Z Conference papers
%X Stochastic multi-armed bandits solve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be more difficult than the standard multi-armed bandit setting, due in part to an exploration risk that introduces a regret term associated with the variability of the algorithm itself. Using variance as a measure of risk, we define two algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
%G English
%2 https://inria.hal.science/hal-00772609/document
%2 https://inria.hal.science/hal-00772609/file/risk-bandit-cr.pdf
%L hal-00772609
%U https://inria.hal.science/hal-00772609
%~ UNIV-LILLE3
%~ CNRS
%~ INRIA
%~ IRISA
%~ INRIA-LILLE
%~ LAGIS
%~ OPENAIRE
%~ INRIA_TEST
%~ TESTALAIN1
%~ INRIA2
%~ UR1-MATH-STIC
%~ UR1-UFR-ISTIC
%~ INRIA-300009
%~ UR1-MATH-NUM
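
Note: as a rough illustration of the variance-based risk-return objective the abstract describes, the Python sketch below scores each arm by its empirical variance minus a risk-tolerance weight times its empirical mean, with an exploration bonus in the spirit of a lower-confidence-bound rule. Every concrete detail here (the parameter names rho and c, the exact index, the function names) is an assumption for illustration only, not the paper's two algorithms.

import numpy as np

def mean_variance_index(rewards, rho=1.0):
    # Empirical mean-variance of one arm: variance minus rho times mean.
    # Assumed formalization; a risk-averse learner prefers a LOWER value.
    return np.var(rewards) - rho * np.mean(rewards)

def pick_arm(history, t, rho=1.0, c=1.0):
    # history: one list of observed rewards per arm, each pulled at least once.
    # Subtract an exploration bonus so under-sampled arms still get tried
    # (illustrative lower-confidence-bound-style rule, not the paper's method).
    scores = []
    for rewards in history:
        n = len(rewards)
        bonus = c * np.sqrt(np.log(t) / n)
        scores.append(mean_variance_index(rewards, rho) - bonus)
    return int(np.argmin(scores))

With a large rho the rule approaches standard expected-reward maximization; with a small rho it favors low-variance arms, matching the risk-return trade-off the abstract describes.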