Inverse Kinematics On-line Learning: A Kernel-Based Policy-Gradient Approach
Abstract
In machine learning, ``kernel methods'' provide a consistent framework for applying the perceptron algorithm to non-linear problems. In reinforcement learning, an analog of the perceptron delta rule can be derived from the ``policy-gradient'' approach proposed by Williams in 1992 in the framework of stochastic neural networks. Despite its generality and straightforward applicability to continuous command problems, few developments of the method have been proposed since. Here we present an account of the use of a kernel transformation of the perception space for the \emph{on-line} learning of a motor command, in the case of eye orientation and multi-joint arm control. We first show that such a setting allows the system to solve non-linear problems, such as the log-like resolution of a foveated retina or the transformation from a Cartesian perception space to the ``angular'' command of a multi-joint arm. More interestingly, the on-line recurrent learning we propose is simple and fully operational in changing environments, and allows for continual improvement of the policy on the basis of simple and measurable error terms.
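As a minimal sketch of the approach outlined above (not the authors' implementation), the following Python snippet combines an RBF kernel expansion of the perception space with Williams' policy-gradient (REINFORCE) rule for Gaussian stochastic units, applied to a hypothetical two-joint planar arm; the arm geometry, kernel centres, and hyper-parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-joint planar arm with unit-length segments (illustration only).
def forward_kinematics(theta):
    x = np.cos(theta[0]) + np.cos(theta[0] + theta[1])
    y = np.sin(theta[0]) + np.sin(theta[0] + theta[1])
    return np.array([x, y])

# Kernel transformation of the perception space: RBF features on random centres.
centres = rng.uniform(-2.0, 2.0, size=(50, 2))
def phi(target):
    return np.exp(-np.sum((centres - target) ** 2, axis=1) / (2 * 0.5 ** 2))

W = np.zeros((2, 50))  # linear read-out in feature space, one row per joint
sigma = 0.1            # exploration noise of the Gaussian stochastic units
alpha = 0.05           # learning rate
baseline = 0.0         # running reward baseline to reduce gradient variance

for episode in range(5000):
    target = rng.uniform(-1.5, 1.5, size=2)       # perceived target position
    f = phi(target)
    mu = W @ f                                    # mean command of the policy
    theta = mu + sigma * rng.standard_normal(2)   # sampled motor command
    # Simple, measurable error term: distance of the end-effector to the target.
    reward = -np.linalg.norm(forward_kinematics(theta) - target)
    # Williams' rule for Gaussian units: dJ/dW ~ (r - b) * (a - mu) / sigma^2 * phi(s)^T
    W += alpha * (reward - baseline) * np.outer((theta - mu) / sigma ** 2, f)
    baseline += 0.05 * (reward - baseline)
```

Because the update uses only the sampled command and the scalar reward, the same loop keeps improving the policy on-line if the environment (here, the arm geometry or target distribution) changes.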
Domains
Artificial Intelligence [cs.AI]

Origin: Files produced by the author(s)