Learn Before
Formula for Performance Gap Recovered (PGR)
The formula for calculating the Performance Gap Recovered (PGR) is:

PGR = (P_weak→strong − P_weak) / (P_ceiling − P_weak)

In this equation, P_weak represents the baseline performance of the weak model, P_weak→strong is the performance of the strong model after being supervised by the weak model, and P_ceiling is the strong model's maximum potential performance.
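The formula above can be sketched as a small helper function (a minimal illustration; the function name and the guard against a zero denominator are my own choices, not part of the original definition):

```python
def pgr(p_weak, p_weak_to_strong, p_ceiling):
    """Performance Gap Recovered: the fraction of the gap between the
    weak model's baseline and the strong model's ceiling that is closed
    by weak-to-strong supervision."""
    gap = p_ceiling - p_weak
    if gap == 0:
        raise ValueError("Ceiling equals weak baseline; PGR is undefined.")
    return (p_weak_to_strong - p_weak) / gap

# A weak baseline of 50, supervised result of 70, and ceiling of 90
# recovers half of the potential gap:
print(pgr(50, 70, 90))  # 0.5
```

A PGR of 1 means the supervised strong model fully reached its ceiling; a PGR of 0 means it did no better than the weak supervisor.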

Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Formula for Performance Gap Recovered (PGR)
An AI research team conducts two separate experiments to improve a powerful model's performance by having it learn from a less powerful one. The results are as follows:
- Experiment A: The less powerful model scores 50% on a task. The powerful model, after learning from the less powerful one, scores 70%. The powerful model's maximum possible score on this task is 90%.
- Experiment B: The less powerful model scores 70% on a different task. The powerful model, after learning from the less powerful one, scores 78%. The powerful model's maximum possible score on this task is 80%.
Based on these results, which experiment demonstrates a more effective transfer of knowledge from the less powerful model to the more powerful one, in terms of closing the potential performance gap?
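One way to reason about this question is to compute the PGR for each experiment directly (a minimal sketch; the `pgr` helper is my own naming, applying the formula from the definition above):

```python
def pgr(p_weak, p_weak_to_strong, p_ceiling):
    """Fraction of the weak-to-ceiling performance gap recovered."""
    return (p_weak_to_strong - p_weak) / (p_ceiling - p_weak)

# Experiment A: weak = 50, supervised strong = 70, ceiling = 90
pgr_a = pgr(50, 70, 90)  # (70 - 50) / (90 - 50) = 0.5

# Experiment B: weak = 70, supervised strong = 78, ceiling = 80
pgr_b = pgr(70, 78, 80)  # (78 - 70) / (80 - 70) = 0.8

print(pgr_a, pgr_b)
```

Although Experiment A shows a larger raw gain (20 points vs. 8), Experiment B closes a larger fraction of its available gap, so it demonstrates the more effective knowledge transfer by the PGR measure.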
Evaluating Knowledge Transfer Effectiveness
Evaluating Performance Gains in Model Training
Interpretation and Empirical Results of Performance Gap Recovered
Learn After
PGR Calculation Scenario
In an experiment, a researcher observes that the performance of a strong model after being supervised by a weak one (P_weak→strong) is actually lower than the weak model's initial baseline performance (P_weak). Assuming the strong model's maximum potential performance (P_ceiling) is greater than the weak model's baseline, what is the resulting Performance Gap Recovered (PGR) and what does it signify?
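The scenario described can be checked numerically (an illustrative sketch with made-up scores; the specific values 60, 55, and 90 are my own example, not from the experiment):

```python
def pgr(p_weak, p_weak_to_strong, p_ceiling):
    """Fraction of the weak-to-ceiling performance gap recovered."""
    return (p_weak_to_strong - p_weak) / (p_ceiling - p_weak)

# Supervised strong model (55) scores BELOW the weak baseline (60),
# while the ceiling (90) is above the baseline, so the numerator is
# negative and the denominator is positive:
result = pgr(60, 55, 90)  # (55 - 60) / (90 - 60), a negative value
print(result)
```

Because the denominator is positive whenever P_ceiling > P_weak, a supervised score below the weak baseline always yields a negative PGR, signifying that weak supervision actively degraded the strong model below even the weak model's level.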
Interpreting the PGR Formula's Denominator