Short Answer

Analyzing a Flawed Policy Gradient Derivation

A student is attempting to derive the policy gradient, starting from the expected-return objective. Their derivation is shown below. Identify the specific mathematical error in their steps and explain why this error introduces a fundamental problem that the standard derivation (sketched after the steps for reference) avoids.

Derivation Steps:

  1. Objective Function: $J(\theta) = \sum_{\tau} \text{Pr}_{\theta}(\tau) R(\tau)$
  2. Gradient Calculation: $\frac{\partial J(\theta)}{\partial \theta} = \sum_{\tau} \left[ \frac{\partial \text{Pr}_{\theta}(\tau)}{\partial \theta} R(\tau) + \text{Pr}_{\theta}(\tau) \frac{\partial R(\tau)}{\partial \theta} \right]$
  3. Conclusion: The student stops here, concluding that for this gradient to be useful, the reward function $R(\tau)$ must be differentiable with respect to the policy parameters $\theta$.
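For reference, the standard derivation that the prompt contrasts against can be sketched using the log-derivative identity $\frac{\partial \text{Pr}_{\theta}(\tau)}{\partial \theta} = \text{Pr}_{\theta}(\tau) \frac{\partial \log \text{Pr}_{\theta}(\tau)}{\partial \theta}$, valid wherever $\text{Pr}_{\theta}(\tau) > 0$. Treating $R(\tau)$ as a function of the sampled trajectory alone, the gradient becomes

$$\frac{\partial J(\theta)}{\partial \theta} = \sum_{\tau} \frac{\partial \text{Pr}_{\theta}(\tau)}{\partial \theta} R(\tau) = \sum_{\tau} \text{Pr}_{\theta}(\tau) \frac{\partial \log \text{Pr}_{\theta}(\tau)}{\partial \theta} R(\tau) = \mathbb{E}_{\tau \sim \text{Pr}_{\theta}}\left[ \frac{\partial \log \text{Pr}_{\theta}(\tau)}{\partial \theta} R(\tau) \right],$$

which can be estimated from sampled trajectories and never requires differentiating $R(\tau)$ itself.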

