Learn Before
A reinforcement learning agent is trained to find the exit in a maze. Two reward models are proposed. Model A gives a reward of +100 for reaching the exit and 0 for every other step. Model B gives +100 for reaching the exit but also a -1 penalty for each step taken. How will the value function derived from Model B most likely differ from the one derived from Model A for states that are not the exit?
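The contrast can be made concrete with a minimal sketch: tabular value iteration on a hypothetical 1-D corridor (states 0..n-1, exit at n-1, no discounting), with the `step_penalty` parameter standing in for the two reward models.

```python
def value_iteration(n, step_penalty, goal_reward=100, iters=100):
    """Tabular value iteration on a 1-D corridor: states 0..n-1, exit at n-1."""
    V = [0.0] * n  # exit state is absorbing and yields no further reward
    for _ in range(iters):
        for s in range(n - 1):  # sweep over non-exit states
            best = float("-inf")
            for s2 in (max(s - 1, 0), min(s + 1, n - 1)):  # move left / right
                # reward for the transition: goal bonus on entering the exit,
                # plus the (possibly zero) per-step penalty
                r = (goal_reward if s2 == n - 1 else 0.0) + step_penalty
                best = max(best, r + V[s2])
            V[s] = best
    return V

model_a = value_iteration(5, step_penalty=0.0)   # +100 at exit, 0 otherwise
model_b = value_iteration(5, step_penalty=-1.0)  # +100 at exit, -1 per step
```

Under Model A every non-exit state converges to the same value (100), so the value function gives the agent no gradient toward the exit; under Model B the per-step penalty makes the value fall with distance from the exit (99, 98, 97, ...), which favors shorter paths.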
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Impact of Reward Model Flaws on Value Function Estimation
Diagnosing Undesirable Agent Behavior
In a reinforcement learning framework, it is possible to compute a meaningful long-term value function for a policy even if the reward model consistently provides random, uninformative feedback for every action.