A Self-Made Agent Based on Action-Selection
Abstract
Some agents must face multiple objectives simultaneously. In such cases, and in partially observable environments, classical Reinforcement Learning (RL) tends to converge to poor local optima, learning only straightforward behaviors. We present a method that identifies and learns independent ``basic'' behaviors, each solving one of the separate tasks the agent faces. By combining these behaviors through an action-selection algorithm, the agent can then efficiently handle a variety of complex goals in complex environments.
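As a rough illustration of the kind of combination the abstract refers to (not the authors' implementation), the sketch below assumes each ``basic'' behavior is a tabular Q-learner trained on its own sub-task, and combines their action preferences with a simple greatest-mass selection rule; all class and function names are hypothetical.

```python
import numpy as np


class BasicBehavior:
    """One tabular Q-learner trained on a single sub-task (illustrative)."""

    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_actions))
        self.lr = lr
        self.gamma = gamma

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning update on this behavior's own reward.
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

    def preferences(self, s):
        # Action preferences of this behavior in state s.
        return self.q[s]


def select_action(behaviors, state, epsilon=0.1, rng=np.random):
    """Combine behaviors by summing their preferences (greatest-mass rule)."""
    n_actions = behaviors[0].q.shape[1]
    if rng.random() < epsilon:
        return int(rng.randint(n_actions))  # occasional exploration
    combined = sum(b.preferences(state) for b in behaviors)
    return int(np.argmax(combined))


# Toy usage: two behaviors over 5 states and 3 actions, each trained on its
# own reward signal, then queried jointly through select_action().
behaviors = [BasicBehavior(5, 3), BasicBehavior(5, 3)]
action = select_action(behaviors, state=0)
```

The greatest-mass rule used here is only one possible arbitration scheme; the point of the sketch is simply that each behavior is learned against its own reward and the agent acts on their combined preferences.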