Learn Before
A key advantage of implementing a potential-based reward shaping function is that it provides denser intermediate feedback without altering the set of optimal policies, thereby accelerating learning in complex problems with sparse rewards.
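A minimal sketch of how a potential-based shaping term F(s, s') = γΦ(s') − Φ(s) can be added to a sparse environment reward. The corridor environment, the goal state, and the distance-based potential Φ here are all illustrative assumptions, not part of the original card.

```python
# Sketch: potential-based reward shaping in a toy 1-D corridor.
# Assumptions (hypothetical, for illustration): states are integers
# 0..5, the goal is state 5, and the environment reward is sparse.
GAMMA = 0.99  # discount factor
GOAL = 5      # hypothetical goal state

def potential(state):
    # Hypothetical potential Phi(s): negative distance to the goal,
    # so states nearer the goal have higher potential.
    return -abs(GOAL - state)

def shaped_reward(state, next_state, env_reward):
    # F(s, s') = gamma * Phi(s') - Phi(s).
    # Adding F to the environment reward densifies feedback while
    # leaving the optimal policy unchanged (policy invariance).
    return env_reward + GAMMA * potential(next_state) - potential(state)

# A step toward the goal earns a small shaping bonus; a step away
# is penalized, yet the underlying optimal path is preserved.
toward = shaped_reward(2, 3, 0.0)  # positive
away = shaped_reward(3, 2, 0.0)    # negative
```

Because F telescopes along any trajectory, the total shaping contribution depends only on the start and end potentials, which is why the optimal policy is unaffected.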
Tags
Data Science
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Comprehension in Revised Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Reward Shaping Formula
An agent is being trained to navigate a complex maze. It receives a large positive reward (+100) only upon reaching the exit, and a reward of 0 for all other steps. To accelerate learning in this environment with delayed feedback, a developer decides to add an additional, intermediate reward at each step. Which of the following intermediate reward strategies is most likely to guide the agent effectively toward the exit without inadvertently changing the optimal path?
Analyzing Reward Shaping Strategies for Text Summarization