Learn Before
Evaluating Actions with a State-Value Baseline
In a reinforcement learning scenario, an agent uses the expected future reward from its current state as a baseline to evaluate its actions. This baseline helps determine if an action was better or worse than average for that specific state. Based on the case study below, calculate the resulting value used for the policy update for both Trajectory A and Trajectory B, and explain what each value signifies about the action taken.
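The calculation asked for above is the advantage: the trajectory's total reward minus the state-value baseline. The numbers in the sketch below are hypothetical placeholders (the case study's actual rewards and state value are not reproduced here); the sign of the result is what matters for the policy update.

```python
# Advantage = total reward - state-value baseline, i.e. A(s, a) = R - V(s).
def advantage(total_reward: float, state_value: float) -> float:
    """How much better (or worse) the outcome was than expected from this state."""
    return total_reward - state_value

# Hypothetical numbers, not the case study's: both trajectories start from a
# state whose expected future reward (the baseline) is 6.0.
adv_a = advantage(10.0, 6.0)  # +4.0: action did better than average -> reinforce it
adv_b = advantage(3.0, 6.0)   # -3.0: action did worse than average -> discourage it
```

A positive advantage increases the probability of the action taken; a negative advantage decreases it.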
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Advantage Function Definition
In a reinforcement learning algorithm, a baseline is subtracted from the total reward to stabilize the learning process. Consider two different baseline strategies:
Strategy 1: Use a single, fixed value for the baseline, such as the average total reward calculated over many past episodes.
Strategy 2: Use a dynamic value for the baseline that is equal to the expected future reward from the agent's current state.
Why is Strategy 2 generally more effective at reducing the variance of the policy updates compared to Strategy 1?
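One way to see the answer is a toy simulation, sketched below under assumed dynamics (a made-up environment where the reward depends heavily on the starting state): a single fixed baseline leaves all of the across-state spread in the update signal, while a state-dependent baseline cancels it, leaving only the per-state noise.

```python
import random

random.seed(0)

# Toy model (an assumption for illustration): state s in {0..4},
# reward ~ 10*s plus small Gaussian noise, so reward varies mostly by state.
def sample_episode():
    s = random.randrange(5)
    r = 10 * s + random.gauss(0, 1)
    return s, r

episodes = [sample_episode() for _ in range(10_000)]
rewards = [r for _, r in episodes]

# Strategy 1: one fixed baseline -- the average reward over all episodes.
fixed_b = sum(rewards) / len(rewards)
signal_fixed = [r - fixed_b for _, r in episodes]

# Strategy 2: a per-state baseline equal to the expected reward V(s) = 10*s.
signal_state = [r - 10 * s for s, r in episodes]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The fixed baseline keeps the spread between states (variance ~200 here);
# the state baseline removes it, leaving only the noise (variance ~1).
print(variance(signal_fixed) > variance(signal_state))
```

Because the state-dependent baseline subtracts out the part of the reward that is predictable from the state alone, only the genuinely action-dependent surprise drives the update, which is exactly the variance reduction the question asks about.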
Evaluating Actions with a State-Value Baseline
Analyzing the Impact of a State-Value Baseline