Zap Q-Learning With Nonlinear Function Approximation
Abstract
Zap Q-learning is a recent class of reinforcement learning algorithms, motivated primarily as a means to accelerate convergence. Stability theory has been absent outside of two restrictive classes: the tabular setting, and optimal stopping. This paper introduces a new framework for analysis of a more general class of recursive algorithms known as stochastic approximation. Based on this general theory, it is shown that Zap Q-learning is consistent under a non-degeneracy assumption, even when the function approximation architecture is nonlinear. Zap Q-learning with neural network function approximation emerges as a special case, and is tested on examples from OpenAI Gym. Based on multiple experiments with a range of neural network sizes, it is found that the new algorithms converge quickly and are robust to the choice of function approximation architecture.