High Variance in Policy Gradient Estimates
A significant drawback of policy gradient methods is the high variance of their gradient estimates. Because each update is computed from a small number of sampled trajectories, individual estimates can differ wildly from the true gradient; this noise can make training unstable and sample-inefficient.
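A minimal numerical sketch of this effect, using a hypothetical two-armed bandit with a softmax policy: each single-sample REINFORCE estimate `R(a) * ∇ log π(a)` is unbiased, but its per-component variance is large relative to the mean. All names and reward values here are illustrative assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-armed bandit: softmax policy over logits theta,
# deterministic arm rewards chosen to differ in scale.
theta = np.array([0.0, 0.0])
rewards = np.array([1.0, 10.0])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

probs = softmax(theta)  # uniform [0.5, 0.5] at theta = 0

# Draw many single-sample REINFORCE gradient estimates:
#   g = R(a) * grad_theta log pi(a), where for a softmax policy
#   grad log pi(a) = one_hot(a) - probs.
actions = rng.choice(2, size=10_000, p=probs)
grads = np.eye(2)[actions] - probs
samples = rewards[actions][:, None] * grads

print("mean gradient:         ", samples.mean(axis=0))
print("per-component variance:", samples.var(axis=0))
```

The mean is close to the true gradient (about [-2.25, 2.25] in this setup), but the per-component variance is several times larger than the squared mean, so any single sample is a very noisy update direction.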
Tags
Ch.4 Alignment - Foundations of Large Language Models
Computing Sciences
Learn After
Baseline Method for Policy Gradient Variance Reduction
Total Reward (Return)
An agent is trained using a policy gradient method where the policy is updated based on the total reward of an entire trajectory. Consider two different trajectories that result in the same total reward:
- Trajectory A: The agent receives a small, consistent reward of +1 at each of 10 steps, for a total reward of +10.
- Trajectory B: The agent receives a reward of 0 for the first 9 steps and a large reward of +10 at the final step, for a total reward of +10.
Which of the following statements best analyzes the impact of these reward distributions on the policy update?
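A small sketch of the key observation behind this question, under the stated reward sequences: when the policy gradient weights every step by the trajectory's total return, trajectories A and B produce identical per-step weights, so the update cannot tell the two credit-assignment patterns apart. The `reward_to_go` helper is an illustrative alternative weighting, not something defined in the question.

```python
# Reward sequences from trajectories A and B above.
traj_A = [1.0] * 10
traj_B = [0.0] * 9 + [10.0]

def total_return_weights(rs):
    # Vanilla REINFORCE: every step is weighted by the same total return.
    G = sum(rs)
    return [G] * len(rs)

def reward_to_go(rs):
    # Alternative weighting: each step is weighted only by rewards
    # that come at or after it.
    out, g = [], 0.0
    for r in reversed(rs):
        g += r
        out.append(g)
    return out[::-1]

print(total_return_weights(traj_A))  # [10.0] * 10
print(total_return_weights(traj_B))  # [10.0] * 10  -- identical to A
print(reward_to_go(traj_A))          # [10.0, 9.0, ..., 1.0]
print(reward_to_go(traj_B))          # [10.0] * 10
```

Under total-return weighting the two trajectories are indistinguishable to the update, even though their per-step reward structure differs; reward-to-go weighting separates them for trajectory A.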
Diagnosing Unstable Reinforcement Learning Training
True or False: In a basic policy gradient method, if an agent completes a trajectory with a high positive total reward, the learning algorithm will reinforce every action taken during that trajectory, even those that were suboptimal or did not directly contribute to the final outcome.
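The statement can be checked with a minimal sketch, assuming a tiny softmax policy over three actions in a single hypothetical state: if a suboptimal action happened to occur in a trajectory with total return G = +10, the per-step REINFORCE update still scales its grad-log-prob by that same positive G, so its probability increases. The action index, return, and learning rate here are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

theta = np.zeros(3)
probs_before = softmax(theta)

# Suppose action 2 was taken at some step, contributed nothing to the
# outcome, yet the trajectory's total return was G = +10.
a, G, lr = 2, 10.0, 0.1

grad = -probs_before.copy()
grad[a] += 1.0           # grad of log pi(a | s) for a softmax policy
theta += lr * G * grad   # REINFORCE update for this single step

probs_after = softmax(theta)
print(probs_before[a], "->", probs_after[a])
```

The probability of the uninformative action rises (from 1/3 to roughly 0.58 here), which is why the statement is true: the basic method reinforces every action in a high-return trajectory, suboptimal ones included.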
Impact of Reward Scale Variation on Policy Gradient Variance