Learn Before
Evaluating Agent Action Sequences
Based on the principle of maximizing the total reward accumulated from the current time step onwards, which path should the agent prefer? Justify your answer by calculating the cumulative future reward for each path.
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An agent interacts with an environment over five time steps and receives the following sequence of rewards, starting from time step 1: [-1, +3, +10, -5, +2]. What is the cumulative future reward (also known as the return) calculated from time step 3?

An agent is at time step t. It must choose between two actions, Action A and Action B. If it chooses Action A, the sequence of rewards it will receive from time step t until the end of the episode is [+1, +1, +10]. If it chooses Action B, the sequence of rewards it will receive is [+5, -2, +5]. To maximize its total accumulated reward from this point forward, which action should the agent choose and why?
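The two related questions above can be checked with a short sketch. The helper name `return_from` is illustrative, not from the source; the reward sequences are taken directly from the questions, and no discounting is applied since none is mentioned.

```python
def return_from(rewards, t):
    """Cumulative future reward (return) from 1-indexed time step t:
    the undiscounted sum of all rewards received at step t and after."""
    return sum(rewards[t - 1:])

# First question: full reward sequence starting at time step 1.
rewards = [-1, +3, +10, -5, +2]
print(return_from(rewards, 3))  # 10 + (-5) + 2 = 7

# Second question: compare the two candidate action sequences from step t.
action_a = [+1, +1, +10]  # total accumulated reward: +12
action_b = [+5, -2, +5]   # total accumulated reward: +8
best = "A" if sum(action_a) > sum(action_b) else "B"
print(best)  # Action A yields the larger return (12 > 8)
```

Because the return from time step 3 counts only rewards at steps 3, 4, and 5, the earlier rewards -1 and +3 are excluded; likewise, the agent at step t compares only future sums, so it should choose Action A.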
t. It must choose between two actions, Action A and Action B. If it chooses Action A, the sequence of rewards it will receive from time steptuntil the end of the episode is[+1, +1, +10]. If it chooses Action B, the sequence of rewards it will receive is[+5, -2, +5]. To maximize its total accumulated reward from this point forward, which action should the agent choose and why?Evaluating Agent Action Sequences