Multiple Choice

A reinforcement learning agent operates in an environment where taking a specific action in a given state causes a transition to a new state. The environment's original reward for this transition is -0.5. To guide the agent more effectively, a shaping function is added that contributes an additional reward of +2.0 for this same transition. According to the standard formulation of reward shaping, what is the total transformed reward the agent receives?

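Under the standard formulation of reward shaping (Ng et al., 1999), the shaped reward is the environment reward plus the shaping term. A minimal worked computation for the values given in the question, assuming the +2.0 is applied as an additive shaping bonus F on this transition:

$$ r'(s, a, s') = r(s, a, s') + F(s, a, s') = -0.5 + 2.0 = +1.5 $$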

