Conference Paper, Year: 2015

Maximum Entropy Semi-Supervised Inverse Reinforcement Learning

Julien Audiffren
Michal Valko
Alessandro Lazaric
Mohammad Ghavamzadeh

Abstract

A popular approach to apprenticeship learning (AL) is to formulate it as an inverse reinforcement learning (IRL) problem. The MaxEnt-IRL algorithm successfully integrates the maximum entropy principle into IRL and, unlike its predecessors, resolves the ambiguity arising from the fact that a possibly large number of policies could match the expert's behavior. In this paper, we study an AL setting in which, in addition to the expert's trajectories, a number of unsupervised trajectories are available. We introduce MESSI, a novel algorithm that combines MaxEnt-IRL with principles coming from semi-supervised learning. In particular, MESSI integrates the unsupervised data into the MaxEnt-IRL framework using a pairwise penalty on trajectories. Empirical results in highway driving and grid-world problems indicate that MESSI is able to take advantage of the unsupervised trajectories and improve the performance of MaxEnt-IRL.
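To make the mechanism described above concrete, below is a minimal, hypothetical sketch (not the paper's MESSI implementation): a MaxEnt-IRL gradient over trajectory-level features combined with a pairwise, graph-Laplacian-style penalty that pulls the scores of similar trajectories together. The trajectory features, the Gaussian similarity kernel, the penalty weight lam, and the function name maxent_ss_gradient are all assumptions made for illustration.

import numpy as np

def maxent_ss_gradient(theta, expert_feats, unsup_feats, lam=0.1, sigma=1.0):
    """Ascent direction for a semi-supervised MaxEnt-IRL-style objective (illustrative only).

    expert_feats : (n_e, d) feature vectors f(tau) of expert trajectories
    unsup_feats  : (n_u, d) feature vectors of unsupervised trajectories
    The reward of a trajectory is modeled as theta @ f(tau), with
    P(tau) proportional to exp(theta @ f(tau)) as in MaxEnt-IRL.
    """
    all_feats = np.vstack([expert_feats, unsup_feats])

    # MaxEnt-IRL part: expert feature expectation minus the model's
    # feature expectation, here crudely approximated over the available trajectories.
    scores = all_feats @ theta
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    grad_ll = expert_feats.mean(axis=0) - probs @ all_feats

    # Pairwise penalty (assumption): similar trajectories should get similar scores.
    # Penalty = 0.5 * sum_ij w_ij (s_i - s_j)^2 with Gaussian weights w_ij.
    diffs = all_feats[:, None, :] - all_feats[None, :, :]
    weights = np.exp(-np.square(diffs).sum(-1) / (2.0 * sigma**2))
    score_gaps = scores[:, None] - scores[None, :]
    grad_pen = np.einsum('ij,ij,ijd->d', weights, score_gaps, diffs)

    return grad_ll - lam * grad_pen

# Toy usage with synthetic features (purely illustrative).
rng = np.random.default_rng(0)
theta = np.zeros(4)
expert = rng.normal(size=(5, 4)) + 1.0
unsup = rng.normal(size=(20, 4))
for _ in range(200):
    theta += 0.05 * maxent_ss_gradient(theta, expert, unsup)

The point of the sketch is only that the unsupervised trajectories enter the objective solely through the pairwise similarity term, while the likelihood term is driven by the expert trajectories, which is the high-level idea the abstract describes.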
Main file: messi-TR.pdf (494.85 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01146187, version 1 (20-07-2015)

Identifiers

  • HAL Id: hal-01146187, version 1

Cite

Julien Audiffren, Michal Valko, Alessandro Lazaric, Mohammad Ghavamzadeh. Maximum Entropy Semi-Supervised Inverse Reinforcement Learning. International Joint Conference on Artificial Intelligence, Jul 2015, Buenos Aires, Argentina. ⟨hal-01146187⟩
