Cooperation in stochastic games through communication
Abstract
We describe a process of reinforcement learning in two-agent general-sum stochastic games under imperfect observability of moves and payoffs. It is known that with naive Q-learning, agents can learn equilibrium policies under the discounted reward criterion, but in the absence of a global optimum these equilibria may be arbitrarily worse for both agents than some non-equilibrium policy. We aim instead for Pareto-efficient policies, under which both agents enjoy higher payoffs than at an equilibrium, and show that agents can achieve this using naive Q-learning augmented with communication and a payoff interpretation rule. In essence, our objective is to shift the focus of learning from equilibria (to which solipsistic algorithms converge) to non-equilibria, by transforming the latter into equilibria.
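To make the idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: two naive Q-learners play a stateless repeated prisoner's dilemma, each communicates its received payoff to the other, and a hypothetical interpretation rule (here, simply valuing the sum of the two communicated payoffs) is applied before the Q-update. Under the reinterpreted payoffs the Pareto-efficient joint action (cooperate, cooperate) becomes a best response for each learner, so a non-equilibrium of the original game is learned as if it were an equilibrium. The single-state setting, the additive rule, and the learning parameters are all assumptions made for illustration.

    import random
    from collections import defaultdict

    ACTIONS = ["C", "D"]
    PAYOFFS = {  # (row, col) -> (row payoff, col payoff); standard prisoner's dilemma
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }

    ALPHA, EPSILON, EPISODES = 0.1, 0.1, 20000

    def interpret(own, communicated):
        # Hypothetical interpretation rule: value the joint (communicated) payoff,
        # not the private one.  This is an assumption for the sketch only.
        return own + communicated

    def choose(q):
        # Epsilon-greedy action selection over the agent's own Q-values.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[a])

    q1, q2 = defaultdict(float), defaultdict(float)
    for _ in range(EPISODES):
        a1, a2 = choose(q1), choose(q2)
        r1, r2 = PAYOFFS[(a1, a2)]
        # Each agent updates naively on the interpreted payoff built from the
        # other's communicated payoff (stateless Q-learning, so no bootstrap term).
        q1[a1] += ALPHA * (interpret(r1, r2) - q1[a1])
        q2[a2] += ALPHA * (interpret(r2, r1) - q2[a2])

    print("agent 1 prefers:", max(ACTIONS, key=lambda a: q1[a]))
    print("agent 2 prefers:", max(ACTIONS, key=lambda a: q2[a]))

With the additive rule, cooperation dominates defection in the interpreted game (6 or 5 versus 5 or 2), so both learners converge to the Pareto-efficient outcome that naive Q-learning on private payoffs alone would not sustain.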