Introducing n-Step Temporal-Difference Methods | by Oliver S | Dec, 2024


Dissecting “Reinforcement Learning” by Richard S. Sutton with custom Python implementations, Episode V

In our previous post, we wrapped up the introductory series on fundamental reinforcement learning (RL) methods by exploring Temporal-Difference (TD) learning. TD methods merge the strengths of Dynamic Programming (DP) and Monte Carlo (MC) methods, leveraging their best features to form some of the most important RL algorithms, such as Q-learning.

Building on that foundation, this post delves into n-step TD learning, a versatile approach introduced in Chapter 7 of Sutton’s book [1]. This method bridges the gap between classical TD and MC methods. Like TD, n-step methods use bootstrapping (leveraging prior estimates), but they also incorporate the next n rewards, offering a unique blend of short-term and long-term learning. In a future post, we’ll generalize this concept even further with eligibility traces.
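To make this concrete before we dive in, here is a minimal sketch of tabular n-step TD prediction for estimating a state-value function V under a fixed policy, following the update rule described in Chapter 7 of [1]. The Gymnasium-style `env` interface and the `policy` callable are assumptions made purely for illustration here, not code taken from the accompanying GitHub repository.

```python
import numpy as np
from collections import defaultdict


def n_step_td_prediction(env, policy, n=4, alpha=0.1, gamma=0.99, num_episodes=1000):
    """Sketch of tabular n-step TD prediction (prediction problem only).

    Assumptions: `env` follows the Gymnasium API (reset() -> (state, info),
    step(a) -> (state, reward, terminated, truncated, info)), states are
    hashable (e.g. discrete), and `policy(state)` returns an action.
    """
    V = defaultdict(float)  # state-value estimates, default 0
    for _ in range(num_episodes):
        state, _ = env.reset()
        states, rewards = [state], [0.0]  # rewards[t + 1] is the reward after step t
        T = float("inf")  # episode length, unknown until termination
        t = 0
        while True:
            if t < T:
                # Act according to the fixed policy and store the transition
                action = policy(states[t])
                next_state, reward, terminated, truncated, _ = env.step(action)
                states.append(next_state)
                rewards.append(reward)
                if terminated or truncated:
                    T = t + 1
            tau = t - n + 1  # time step whose value estimate is updated now
            if tau >= 0:
                # n-step return: discounted rewards plus a bootstrapped tail
                G = sum(
                    gamma ** (i - tau - 1) * rewards[i]
                    for i in range(tau + 1, min(tau + n, T) + 1)
                )
                if tau + n < T:
                    G += gamma ** n * V[states[tau + n]]
                V[states[tau]] += alpha * (G - V[states[tau]])
            if tau == T - 1:
                break
            t += 1
    return V
```

Setting n = 1 recovers the one-step TD(0) update from the previous post, while choosing n at least as large as the episode length turns the target into a full Monte Carlo return, which is exactly the sense in which n-step methods sit between the two.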

We’ll follow a structured approach, starting with the prediction problem before moving to control. Along the way, we’ll:

  • Introduce n-step Sarsa,
  • Extend it to off-policy learning,
  • Explore the n-step tree backup algorithm, and
  • Present a unifying perspective with n-step Q(σ).

As always, you can find all accompanying code on GitHub. Let’s dive in!
