Learn Before
Analyzing a Flawed Reward Shaping Implementation
Based on the provided scenario, identify the fundamental error in the engineer's implementation of the shaping reward and explain why this error could lead the agent to learn a suboptimal policy, even if the potential function itself is perfectly designed.
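The flawed scenario itself is not reproduced in this card, but one common implementation error of exactly this kind is dropping the discount factor, i.e. computing the shaping term as Φ(s_{t+1}) − Φ(s_t) rather than γΦ(s_{t+1}) − Φ(s_t). The sketch below (a hypothetical illustration; the state names and potential values are assumed, not from the scenario) shows why this matters: without γ the discounted shaping totals no longer telescope, so two trajectories with the same start and end states receive different net bonuses, and the agent can be biased toward a detour.

```python
# Hypothetical illustration (the card's scenario is not reproduced here):
# a common error is omitting the discount factor, using
#   f = phi(s') - phi(s)   instead of   f = gamma * phi(s') - phi(s).

GAMMA = 0.9
POTENTIALS = {"A": 0.0, "B": 10.0, "C": 0.0}  # assumed toy potential values

def phi(state):
    return POTENTIALS[state]

def flawed_shaping(s, s_next):
    # Missing gamma: under discounting this term no longer telescopes.
    return phi(s_next) - phi(s)

def discounted_shaping_total(trajectory, shaping, gamma=GAMMA):
    """Sum of gamma^t * shaping(s_t, s_{t+1}) along a trajectory."""
    return sum(
        gamma**t * shaping(s, s_next)
        for t, (s, s_next) in enumerate(zip(trajectory, trajectory[1:]))
    )

# Two trajectories with identical start and end states:
direct = ["A", "C"]
detour = ["A", "B", "C"]  # passes through the high-potential state B

print(discounted_shaping_total(direct, flawed_shaping))  # 0.0
print(discounted_shaping_total(detour, flawed_shaping))  # 10 + 0.9*(-10) = 1.0
```

The flawed term pays a net bonus of 1.0 for visiting B and returning, even though both paths end in the same state, so the agent can learn to prefer the detour regardless of how well Φ estimates state values.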
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Value-Based Reward Shaping Formula
A reinforcement learning engineer wants to add an extra reward signal, denoted as a function f, to an agent's learning process to encourage more efficient exploration. They have access to a function Φ(s) which provides a numerical estimate of a state's value, and a discount factor γ. To guarantee that this additional reward signal does not alter the agent's optimal long-term behavior, which of the following structures must the function f have for a transition from state s_t to s_{t+1}?
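The structure that preserves the optimal policy is the potential-based form f(s_t, s_{t+1}) = γΦ(s_{t+1}) − Φ(s_t). The sketch below (names like `phi`, `GAMMA`, and the toy potential table are assumptions for illustration) verifies the key property: along any trajectory, the discounted shaping rewards telescope to γ^T·Φ(s_T) − Φ(s_0), a quantity that depends only on the endpoints, so the relative ordering of policies is unchanged.

```python
# Sketch, assuming a tabular potential function: potential-based shaping
# for a transition s_t -> s_{t+1} takes the form
#   f(s, s') = gamma * phi(s') - phi(s)

GAMMA = 0.9
POTENTIALS = {"A": 1.0, "B": 2.0, "C": 5.0}  # hypothetical state values

def phi(state):
    return POTENTIALS[state]

def shaping_reward(s, s_next, gamma=GAMMA):
    """Potential-based shaping term f(s, s') = gamma*phi(s') - phi(s)."""
    return gamma * phi(s_next) - phi(s)

# The discounted shaping rewards along a trajectory telescope:
#   sum_t gamma^t * f(s_t, s_{t+1}) = gamma^T * phi(s_T) - phi(s_0)
trajectory = ["A", "B", "C"]
total = sum(
    GAMMA**t * shaping_reward(s, s_next)
    for t, (s, s_next) in enumerate(zip(trajectory, trajectory[1:]))
)
expected = GAMMA ** (len(trajectory) - 1) * phi(trajectory[-1]) - phi(trajectory[0])
assert abs(total - expected) < 1e-9
```

Because the total shaping contribution depends only on the start and end states (and vanishes for Φ-preserving comparisons between policies), adding f cannot change which policy is optimal.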
Validating a Potential-Based Shaping Function