Short Answer

Critique of an Arbitrary Shaping Function

A reinforcement learning engineer proposes adding a shaping function to an agent's reward to encourage faster learning. Their argument is: 'As long as my shaping function, f(s, a, s'), provides some extra positive reward for actions that seem directionally correct, it will only help the agent and won't change the ultimate goal.' Explain the fundamental flaw in this reasoning. Describe a simple, hypothetical scenario where a seemingly helpful shaping function could cause the agent to learn a final policy that is different from the one that would be optimal with the original rewards alone.
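The scenario the question asks for can be made concrete with a tiny, purely hypothetical example. The sketch below assumes a 3-state chain MDP (states 0 and 1 ordinary, state 2 an absorbing goal), a discount of 0.99, a true reward of +1 for reaching the goal, and a "directionally correct" shaping bonus of +0.5 for any rightward move; all of these numbers and function names are illustrative assumptions, not part of the original question. Value iteration is run once with the original reward and once with the shaped reward, and the resulting greedy policies differ.

```python
import numpy as np

# Hypothetical 3-state chain MDP: states 0 and 1 are ordinary, state 2 is the
# absorbing goal. Action 0 = "left", action 1 = "right". All values are
# illustrative assumptions chosen to make the effect visible.
N_STATES, N_ACTIONS, GOAL = 3, 2, 2
GAMMA = 0.99

def next_state(s, a):
    """Deterministic transitions; the goal state is absorbing."""
    if s == GOAL:
        return s
    return max(s - 1, 0) if a == 0 else min(s + 1, GOAL)

def original_reward(s, a, s_next):
    """True task reward: +1 only for reaching the goal."""
    return 1.0 if s != GOAL and s_next == GOAL else 0.0

def shaping_bonus(s, a, s_next):
    """'Directionally correct' bonus: +0.5 for any move toward the goal."""
    return 0.5 if s != GOAL and s_next > s else 0.0

def shaped_reward(s, a, s_next):
    """Original reward plus the seemingly helpful shaping bonus."""
    return original_reward(s, a, s_next) + shaping_bonus(s, a, s_next)

def greedy_policy(reward_fn, iters=1000):
    """Tabular value iteration followed by greedy policy extraction."""
    V = np.zeros(N_STATES)
    for _ in range(iters):
        for s in range(N_STATES):
            if s == GOAL:
                continue
            V[s] = max(reward_fn(s, a, next_state(s, a)) + GAMMA * V[next_state(s, a)]
                       for a in range(N_ACTIONS))
    policy = []
    for s in range(N_STATES):
        if s == GOAL:
            policy.append(None)
            continue
        policy.append(max(range(N_ACTIONS),
                          key=lambda a: reward_fn(s, a, next_state(s, a))
                          + GAMMA * V[next_state(s, a)]))
    return policy

if __name__ == "__main__":
    print("optimal policy, original reward:", greedy_policy(original_reward))
    print("optimal policy, shaped reward:  ", greedy_policy(shaped_reward))
```

Under the original reward the greedy policy walks right to the goal from every state, but under the shaped reward the policy in state 1 turns back left: cycling between states 0 and 1 farms the +0.5 bonus indefinitely and is worth more than the one-time goal reward, so the shaping term changes which policy is optimal.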


Updated 2025-10-06


Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science