Multiple Choice

A team is fine-tuning a language model to act as a programming assistant that writes code. For each programming problem, the model generates a block of code. The fine-tuning process involves running the generated code against a set of predefined tests. If the code passes all the tests, the model receives a high reward. If it fails any test, it receives a low reward. The structure, style, or efficiency of the code itself is not directly evaluated for the reward signal. Which principle of model fine-tuning does this scenario best exemplify?
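The pass/fail reward described in the question can be sketched as a small function: it executes the generated code, runs each predefined test against it, and returns a high reward only if every test passes. This is an illustrative sketch under assumptions (the helper name `outcome_reward` and the representation of tests as assert statements are hypothetical, not from any specific framework), and a real system would sandbox the execution.

```python
# Hypothetical sketch of the outcome-based (pass/fail) reward described
# above; names and test format are illustrative assumptions.

def outcome_reward(code: str, tests: list[str]) -> float:
    """Return a high reward only if the generated code passes every test."""
    namespace: dict = {}
    try:
        exec(code, namespace)      # run the model's generated code
        for test in tests:
            exec(test, namespace)  # each test is an assert statement
    except Exception:
        return 0.0                 # any failure -> low reward
    return 1.0                     # all tests passed -> high reward

# Example: a generated solution and its predefined tests.
generated = "def add(a, b):\n    return a + b"
tests = ["assert add(2, 3) == 5", "assert add(-1, 1) == 0"]
print(outcome_reward(generated, tests))  # 1.0 when every test passes
```

Note that the reward depends only on the test outcome, never on the code's structure, style, or efficiency, which is exactly the property the question asks about.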

Updated 2025-09-29

Tags: Ch.5 Inference - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Application in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science