Software, Year: 2024

transformerXL_PPO_JAX

Gautier Hamon

Abstract

This repository provides a JAX implementation of TransformerXL with PPO in a reinforcement-learning setup, following "Stabilizing Transformers for Reinforcement Learning" by Parisotto et al. (https://arxiv.org/abs/1910.06764). The code uses the PureJaxRL template for PPO and adapts part of the Hugging Face TransformerXL implementation to JAX. We also took inspiration from the PyTorch code at https://github.com/MarcoMeter/episodic-transformer-memory-ppo, which simplifies gradient propagation and positional encoding compared to TransformerXL as described in the original paper (https://arxiv.org/abs/1901.02860). The training loop supports Gymnax environments. We also tested it on Craftax, where it beats the baselines presented in the Craftax paper (https://arxiv.org/abs/2402.16801), including PPO-RNN and training with unsupervised environment design and intrinsic motivation. Notably, it reaches the third level (the sewer) and unlocks several advanced achievements, which none of the methods presented in that paper achieved. See the Craftax Results section for more information. Training a 5M-parameter transformer on Craftax for 1e9 steps (with 1024 parallel environments) takes about 6.5 hours on a single A100.
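To make the segment-level memory mechanism mentioned above concrete, here is a minimal sketch, assuming Flax: a single attention layer whose keys and values include a cached memory of past activations placed behind jax.lax.stop_gradient. The class name MemoryAttention and all shapes are illustrative, not taken from the repository, and causal masking and positional encodings are omitted for brevity.

```python
# A minimal sketch of segment-level memory attention, assuming Flax.
# Hypothetical names (MemoryAttention, d_model); not the repository's code.
import jax
import jax.numpy as jnp
import flax.linen as nn


class MemoryAttention(nn.Module):
    """One attention layer that reads a cached memory of past activations."""
    num_heads: int
    d_model: int

    @nn.compact
    def __call__(self, x, memory):
        # x:      (seq_len, d_model) activations of the current segment
        # memory: (mem_len, d_model) cached activations from previous segments
        # Stop gradients through the cache: the simplified scheme does not
        # backpropagate across segment boundaries.
        context = jnp.concatenate([jax.lax.stop_gradient(memory), x], axis=0)
        # Queries come from the current segment; keys/values cover memory + segment.
        out = nn.MultiHeadDotProductAttention(
            num_heads=self.num_heads, qkv_features=self.d_model)(x, context)
        # The next segment reuses the most recent activations as its memory.
        new_memory = context[-memory.shape[0]:]
        return out + x, new_memory


# Usage: one forward pass over a 16-step segment with a 32-step memory.
layer = MemoryAttention(num_heads=4, d_model=64)
x = jnp.zeros((16, 64))
mem = jnp.zeros((32, 64))
params = layer.init(jax.random.PRNGKey(0), x, mem)
y, new_mem = layer.apply(params, x, mem)
```

Because the cache sits behind stop_gradient, each update treats the memory as fixed context rather than backpropagating through previous segments, which matches the gradient-propagation simplification mentioned above.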

Dates and versions

hal-04659863, version 1 (24-07-2024)

Identifiers

  • HAL Id: hal-04659863, version 1

Cite

Gautier Hamon. transformerXL_PPO_JAX. 2024. ⟨hal-04659863⟩