Pre-training generalist agents using offline reinforcement learning
Reinforcement learning (RL) algorithms can learn skills to solve decision-making tasks like playing games, enabling robots to pick up objects, and even optimizing microchip designs. However, running RL algorithms in the real world requires expensive active data collection. Pre-training on diverse datasets has proven to enable data-efficient fine-tuning for individual downstream tasks in natural language processing (NLP) and vision problems. In the same way that BERT or GPT-3 models provide general-purpose initialization for NLP, large RL-pre-trained models could provide general-purpose initialization for decision-making. So, we ask the question: Can we enable similar pre-training to accelerate RL methods and create a general-purpose “backbone” for efficient RL across various tasks?
In “Offline Q-learning on Diverse Multi-Task Data Both Scales and Generalizes”, to be published at ICLR 2023, we discuss how we scaled offline RL, which can be used to train value functions on previously collected static datasets, to provide such a general pre-training method. We demonstrate that Scaled Q-Learning using a diverse dataset is sufficient to learn representations that facilitate rapid transfer to novel tasks and fast online learning on new variations of a task, improving significantly over existing representation learning approaches and even Transformer-based methods that use much larger models.
Scaled Q-learning: Multi-task pre-training with conservative Q-learning
To provide a general-purpose pre-training approach, offline RL needs to be scalable, allowing us to pre-train on data across different tasks and utilize expressive neural network models to acquire powerful pre-trained backbones that can then be specialized to individual downstream tasks. We based our offline RL pre-training method on conservative Q-learning (CQL), a simple offline RL method that combines standard Q-learning updates with an additional regularizer that minimizes the value of unseen actions. With discrete actions, the CQL regularizer is equivalent to a standard cross-entropy loss, which is a simple, one-line modification on standard deep Q-learning (see the first sketch after the list below). A few crucial design decisions made this possible:
- Neural network size: We found that multi-game Q-learning required large neural network architectures. While prior methods often used relatively shallow convolutional networks, we found that models as large as a ResNet 101 led to significant improvements over smaller models.
- Neural network architecture: To learn pre-trained backbones that are useful for new games, our final architecture uses a shared neural network backbone, with separate 1-layer heads outputting the Q-values of each game (see the second sketch after the list below). This design avoids interference between the games during pre-training, while still providing enough data sharing to learn a single shared representation. Our shared vision backbone also utilized a learned position embedding (akin to Transformer models) to keep track of spatial information in the game.
- Representational regularization: Recent work has observed that Q-learning tends to suffer from representational collapse issues, where even large neural networks can fail to learn effective representations. To counteract this issue, we leverage our prior work to normalize the last-layer features of the shared part of the Q-network. Additionally, we utilized a categorical distributional RL loss for Q-learning, which is known to provide richer representations that improve downstream task performance.
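To make the one-line nature of the CQL modification concrete, here is a minimal sketch in PyTorch of a CQL-style loss for discrete actions. The function name, tensor shapes, and the simple one-step TD backup are illustrative assumptions; the actual method uses the categorical distributional loss mentioned above.

```python
import torch
import torch.nn.functional as F

def conservative_q_loss(q_values, actions, rewards, next_q_values,
                        gamma=0.99, cql_alpha=1.0):
    """Sketch of a CQL loss for discrete actions (hypothetical shapes/names).

    q_values:      [batch, num_actions] Q(s, .) from the online network
    actions:       [batch] dataset actions (long tensor)
    rewards:       [batch] dataset rewards
    next_q_values: [batch, num_actions] Q(s', .) from the target network
    """
    # Standard Q-learning TD loss (a plain one-step backup for clarity;
    # the post uses a categorical distributional RL loss instead).
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    td_target = rewards + gamma * next_q_values.max(dim=1).values
    td_loss = F.smooth_l1_loss(q_taken, td_target.detach())

    # CQL regularizer: with discrete actions it is exactly a cross-entropy,
    # i.e., the batch mean of logsumexp_a Q(s, a) - Q(s, a_data), which
    # pushes down the values of actions not seen in the dataset.
    cql_regularizer = F.cross_entropy(q_values, actions)

    return td_loss + cql_alpha * cql_regularizer
```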
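The architecture and regularization choices can likewise be summarized in a short sketch. The class below is an assumption-laden illustration (the names, shapes, and mean-pooling step are ours, not the paper's): a shared backbone with a learned position embedding, normalized last-layer features, and a separate linear Q-value head per game.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGameQNetwork(nn.Module):
    """Illustrative multi-game Q-network: shared visual backbone, learned
    position embedding, normalized features, and one linear head per game."""

    def __init__(self, backbone, num_tokens, feature_dim, num_games, num_actions):
        super().__init__()
        self.backbone = backbone  # shared visual trunk, e.g., a large ResNet
        # Learned position embedding over the backbone's spatial grid,
        # akin to Transformer models, to retain spatial information.
        self.pos_embedding = nn.Parameter(torch.zeros(1, num_tokens, feature_dim))
        # Separate 1-layer heads avoid interference between games while the
        # backbone still learns a single shared representation.
        self.heads = nn.ModuleList(
            [nn.Linear(feature_dim, num_actions) for _ in range(num_games)]
        )

    def forward(self, observation, game_id):
        # Backbone features assumed reshaped to [batch, num_tokens, feature_dim].
        tokens = self.backbone(observation) + self.pos_embedding
        features = tokens.mean(dim=1)
        # Normalize the shared last-layer features to counteract
        # representational collapse.
        features = F.normalize(features, dim=-1)
        return self.heads[game_id](features)
```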
The multi-task Atari benchmark
We evaluate our approach for scalable offline RL on a suite of Atari games, where the goal is to train a single RL agent to play a collection of games using heterogeneous data from low-quality (i.e., suboptimal) players, and then use the resulting network backbone to quickly learn new variations of the pre-training games or completely new games. Training a single policy that can play many different Atari games is difficult enough even with standard online deep RL methods, as each game requires a different strategy and different representations. In the offline setting, some prior works, such as multi-game decision transformers, proposed to dispense with RL entirely, and instead utilize conditional imitation learning in an attempt to scale with large neural network architectures, such as transformers. However, in this work, we show that this kind of multi-game pre-training can be done effectively via RL by using CQL in combination with a few careful design decisions, which we described above.
Scalability on training games
We evaluate the performance and scalability of the Scaled Q-Learning method using two data compositions: (1) near-optimal data, consisting of all the training data appearing in the replay buffers of previous RL runs, and (2) low-quality data, consisting of data from the first 20% of the trials in the replay buffer (i.e., only data from highly suboptimal policies). In our results below, we compare Scaled Q-Learning with an 80-million parameter model to multi-game decision transformers (DT) with either 40-million or 80-million parameter models, and a behavioral cloning (imitation learning) baseline (BC). We observe that Scaled Q-Learning is the only approach that improves over the offline data, attaining about 80% of human-normalized performance.
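For reference, human-normalized performance is conventionally computed per game against random-play and human reference scores; the sketch below states that convention (the specific reference scores are benchmark constants not given in this post).

```python
def human_normalized_score(agent_score, random_score, human_score):
    # Per-game human-normalized score as conventionally defined for Atari:
    # 0.0 corresponds to random play, 1.0 to the human reference score.
    return (agent_score - random_score) / (human_score - random_score)
```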
Further, as shown below, Scaled Q-Learning not only improves in terms of performance, it also enjoys favorable scaling properties: just as the performance of pre-trained language and vision models improves as network sizes get bigger, exhibiting what is commonly referred to as “power-law scaling”, we show that the performance of Scaled Q-Learning enjoys similar scaling properties. While this may seem unsurprising, this kind of scaling has been elusive in RL, with performance often deteriorating with larger model sizes. This suggests that Scaled Q-Learning in combination with the above design choices better unlocks the ability of offline RL to utilize large models.
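Here, “power-law scaling” refers to performance that grows roughly as a power of the model size, i.e., a fit of the form below, where N is the parameter count; the constants are not specified in this post and the expression is only illustrative of the functional form.

$$\text{performance}(N) \approx c \cdot N^{\alpha}, \qquad \alpha > 0$$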
Fine-tuning to new games and variations
To evaluate fine-tuning from this offline initialization, we consider two settings: (1) fine-tuning to a new, entirely unseen game with a small amount of offline data from that game, corresponding to 2M transitions of gameplay, and (2) fine-tuning to a new variant of the games with online interaction. The fine-tuning from offline gameplay data is illustrated below. Note that this condition is generally more favorable to imitation-style methods, Decision Transformer and behavioral cloning, since the offline data for the new games is of relatively high quality. Nonetheless, we see that in most cases Scaled Q-Learning improves over alternative approaches (80% on average), as well as dedicated representation learning methods, such as MAE or CPC, which only use the offline data to learn visual representations rather than value functions.
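As a concrete picture of the offline fine-tuning setting, the sketch below reuses the hypothetical MultiGameQNetwork and conservative_q_loss snippets from earlier: the pre-trained backbone is kept, a fresh single-game head is attached, and the network is fine-tuned with the same conservative objective on the new game's offline transitions. The loop structure, optimizer settings, feature dimensions, and data loader interface are all assumptions.

```python
import copy
import itertools
import torch

def finetune_offline(pretrained_backbone, loader, num_actions,
                     num_tokens=36, feature_dim=512, steps=50_000, lr=3e-4):
    """Offline fine-tuning sketch: pre-trained backbone + fresh 1-layer head,
    trained with the conservative loss on a new game's offline data."""
    q_net = MultiGameQNetwork(pretrained_backbone, num_tokens, feature_dim,
                              num_games=1, num_actions=num_actions)
    target_net = copy.deepcopy(q_net)
    optimizer = torch.optim.Adam(q_net.parameters(), lr=lr)

    # Cycle through the new game's offline transitions for a fixed budget.
    data = itertools.islice(itertools.cycle(loader), steps)
    for step, (obs, actions, rewards, next_obs) in enumerate(data):
        with torch.no_grad():
            next_q = target_net(next_obs, game_id=0)
        loss = conservative_q_loss(q_net(obs, game_id=0), actions, rewards, next_q)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % 2_000 == 0:
            # Periodically sync the target network, as in standard deep Q-learning.
            target_net.load_state_dict(q_net.state_dict())
    return q_net
```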
In the online setting, we see even larger improvements from pre-training with Scaled Q-Learning. In this case, representation learning methods like MAE yield minimal improvement during online RL, whereas Scaled Q-Learning can successfully integrate prior knowledge about the pre-training games to significantly improve the final score after 20k online interaction steps.
These results demonstrate that pre-training generalist value function backbones with multi-task offline RL can significantly boost the performance of RL on downstream tasks, in both offline and online modes. Note that these fine-tuning tasks are quite difficult: the various Atari games, and even variants of the same game, differ significantly in appearance and dynamics. For example, the target blocks in Breakout disappear in the variation of the game shown below, making control difficult. However, the success of Scaled Q-Learning, particularly compared to visual representation learning techniques such as MAE and CPC, suggests that the model is in fact learning some representation of the game dynamics, rather than merely providing better visual features.
Conclusion and takeaways
We presented Scaled Q-Learning, a pre-training method for scaled offline RL that builds on the CQL algorithm, and demonstrated how it enables efficient offline RL for multi-task training. This work made initial progress towards enabling more practical real-world training of RL agents as an alternative to costly and complex simulation-based pipelines or large-scale experiments. Perhaps in the long run, similar work will lead to generally capable pre-trained RL agents that develop broadly applicable exploration and interaction skills from large-scale offline pre-training. Validating these results on a broader range of more realistic tasks, in domains such as robotics (see some initial results) and NLP, is an important direction for future research. Offline RL pre-training has a lot of potential, and we expect that we will see many advances in this area in future work.
Acknowledgements
This work was done by Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, and Sergey Levine. Special thanks to Sherry Yang, Ofir Nachum, and Kuang-Huei Lee for help with the multi-game decision transformer codebase for evaluation and the multi-game Atari benchmark, and Tom Small for illustrations and animation.