What are policy-based methods?

The main goal of Reinforcement Learning is to find the optimal policy $\pi^{*}$ that will maximize the expected cumulative reward. This is because Reinforcement Learning is based on the reward hypothesis: all goals can be described as the maximization of the expected cumulative reward.
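
To make "expected cumulative reward" concrete, here is the standard way to write this objective, where $\tau$ is a trajectory obtained by following the policy, $\gamma$ is the discount factor, and $r_{t+1}$ is the reward at step $t$ (a notation reminder, not new material from this section):

```latex
% The optimal policy maximizes the expected discounted cumulative reward
% over the trajectories generated by following it.
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\tau \sim \pi}\!\left[ \sum_{t=0}^{T} \gamma^{t}\, r_{t+1} \right]
```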

For instance, in a soccer game (where you’re going to train the agents in two units), the goal is to win the game. We can describe this goal in Reinforcement Learning as maximizing the number of goals scored in your opponent’s goal (when the ball crosses the goal line) and minimizing the number of goals scored in your own goal.

[Figure: Soccer]

Value-based, Policy-based, and Actor-critic methods

In the first unit, we saw two methods to find (or, most of the time, approximate) this optimal policy $\pi^{*}$: value-based methods, where we train a value function and derive the policy from it, and policy-based methods, where we train the policy directly.

[Figure: Policy-based methods]

Consequently, thanks to policy-based methods, we can directly optimize our policy $\pi_\theta$ to output a probability distribution over actions $\pi_\theta(a|s)$ that leads to the best cumulative return. To do that, we define an objective function $J(\theta)$, that is, the expected cumulative reward, and we want to find the value $\theta$ that maximizes this objective function.
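
To make this concrete, here is a minimal PyTorch sketch of such a parameterized stochastic policy; the `Policy` class, the layer sizes, and the CartPole-like state/action dimensions are illustrative assumptions, not code from this unit:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class Policy(nn.Module):
    """A small neural network whose weights are the parameters theta of pi_theta."""

    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, action_size),
            nn.Softmax(dim=-1),  # outputs a probability distribution over actions
        )

    def forward(self, state):
        # pi_theta(.|s): probability of each action given the state
        return self.net(state)

    def act(self, state):
        # Sample an action from pi_theta(a|s) and keep its log-probability,
        # which is what gradient ascent on J(theta) will need later.
        probs = self.forward(state)
        dist = Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)


policy = Policy()
state = torch.rand(4)                 # dummy state, just to show the call
action, log_prob = policy.act(state)
print(action, log_prob)
```

The network’s weights play the role of $\theta$: policy-gradient methods will adjust them so that actions leading to higher returns become more probable.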

The difference between policy-based and policy-gradient methods

Policy-gradient methods, which we’re going to study in this unit, are a subclass of policy-based methods. In policy-based methods, the optimization is most of the time on-policy since, for each update, we only use data (trajectories) collected by our most recent version of $\pi_\theta$.
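
The toy loop below illustrates this on-policy pattern on a two-armed bandit: each update uses only a sample drawn from the current $\pi_\theta$, and the sample is then discarded. The one-sample REINFORCE-style update and all the numbers are illustrative assumptions, not the training loop used later in the unit:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

torch.manual_seed(0)
rewards_per_action = torch.tensor([0.1, 1.0])  # action 1 is the better arm

# pi_theta: a tiny policy network over 2 actions
policy = nn.Sequential(nn.Linear(1, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=0.1)

for update in range(50):
    # 1. Collect fresh data with the CURRENT version of pi_theta
    state = torch.zeros(1)
    dist = Categorical(policy(state))
    action = dist.sample()
    reward = rewards_per_action[action]

    # 2. Update theta using only this freshly collected, on-policy sample
    #    (one-sample REINFORCE-style estimate, shown here only for illustration)
    loss = -dist.log_prob(action) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # 3. The sample is discarded; the next iteration samples from the new pi_theta

print(policy(torch.zeros(1)))  # the probability of the better action should grow
```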

The difference between these two methods lies in how we optimize the parameter $\theta$:

- In policy-based methods, we search for the optimal policy indirectly: we optimize $\theta$ by maximizing a local approximation of the objective function, with techniques such as hill climbing, simulated annealing, or evolution strategies.
- In policy-gradient methods, we optimize the parameter $\theta$ directly by performing gradient ascent on the performance of the objective function $J(\theta)$ (see the update rule sketched below).
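
In the policy-gradient case, "optimizing $\theta$ directly" means repeatedly taking a gradient ascent step on the objective (with $\alpha$ the learning rate; the policy gradient theorem covered next gives a way to estimate this gradient):

```latex
% One gradient ascent step on the objective function J(theta)
\theta \leftarrow \theta + \alpha \, \nabla_{\theta} J(\theta)
```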

Before diving deeper into how policy-gradient methods work (the objective function, the policy gradient theorem, gradient ascent, etc.), let’s study the advantages and disadvantages of policy-based methods.
