Conference Papers, Year: 2012

Off-policy Learning in Large-scale POMDP-based Dialogue Systems

Abstract

Reinforcement learning (RL) is now part of the state of the art in spoken dialogue system (SDS) optimisation. The best-performing RL methods, such as those based on Gaussian processes, require testing small changes to the policy in order to assess whether they are improvements or degradations. This process is called on-policy learning. However, it can result in system behaviours that are unacceptable to users. Ideally, a learning algorithm should infer an optimal strategy by observing interactions generated by a non-optimal but acceptable strategy, that is, by learning off-policy. Such methods usually fail to scale up and are thus ill-suited to real-world systems. In this contribution, a sample-efficient, online and off-policy RL algorithm is proposed to learn an optimal policy. The algorithm is combined with a compact non-linear value function representation (namely a multilayer perceptron), enabling it to handle large-scale systems.
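To make the general idea concrete, here is a minimal sketch of off-policy value learning with a multilayer-perceptron approximator: plain semi-gradient Q-learning with one tanh hidden layer, trained on transitions logged by an arbitrary behaviour policy. This is an illustrative assumption, not the authors' specific sample-efficient algorithm; all dimensions, hyperparameters and the synthetic transitions are invented for the toy.

```python
# Hedged sketch: generic off-policy semi-gradient Q-learning with a one-hidden-layer
# MLP value function. NOT the paper's algorithm; all sizes below are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS, HIDDEN = 8, 4, 32   # assumed toy dimensions
GAMMA, LR = 0.95, 1e-2                    # assumed discount and learning rate

# MLP parameters: Q(s) = W2 @ tanh(W1 @ s + b1) + b2, one output per action.
W1 = rng.normal(scale=0.1, size=(HIDDEN, STATE_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_ACTIONS, HIDDEN))
b2 = np.zeros(N_ACTIONS)

def q_values(s):
    """Return per-action Q-values and the hidden activations."""
    h = np.tanh(W1 @ s + b1)
    return W2 @ h + b2, h

def td_update(s, a, r, s_next, done):
    """One Q-learning step on a transition collected by ANY behaviour policy
    (e.g. a hand-crafted but user-acceptable dialogue strategy)."""
    global W1, b1, W2, b2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r + (0.0 if done else GAMMA * np.max(q_next))  # greedy bootstrap
    delta = target - q[a]                                   # TD error
    # Backprop of 0.5*delta^2 (semi-gradient: the target is held fixed).
    grad_q = np.zeros(N_ACTIONS)
    grad_q[a] = -delta
    grad_h = (W2.T @ grad_q) * (1.0 - h**2)  # through the tanh layer
    W2 -= LR * np.outer(grad_q, h)
    b2 -= LR * grad_q
    W1 -= LR * np.outer(grad_h, s)
    b1 -= LR * grad_h

# Usage: replay logged dialogue transitions (s, a, r, s', done). The transitions
# here are synthetic placeholders; the behaviour policy is uniform random.
for _ in range(1000):
    s = rng.normal(size=STATE_DIM)
    a = int(rng.integers(N_ACTIONS))
    r = float(rng.normal())
    s_next = rng.normal(size=STATE_DIM)
    td_update(s, a, r, s_next, done=False)
```

Because the update bootstraps from the greedy action max over Q(s') rather than the action the behaviour policy actually took, the learner estimates the optimal policy's value while only ever observing the acceptable logging policy, which is the off-policy property the abstract emphasises.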
Main file: Supelec763.pdf (196.68 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00684819, version 1 (05-06-2012)

Identifiers

  • HAL Id: hal-00684819, version 1

Cite

Lucie Daubigney, Matthieu Geist, Olivier Pietquin. Off-policy Learning in Large-scale POMDP-based Dialogue Systems. ICASSP 2012, Mar 2012, Kyoto, Japan. pp. 4989-4992. ⟨hal-00684819⟩