
Inference Process with a Fine-Tuned Model

Once a model's parameters have been optimized through fine-tuning, the resulting model, denoted as $F_{\tilde{\omega},\tilde{\theta}}(\cdot)$, can be used for inference on new, unseen data. For instance, in a text classification task, the new text is first tokenized into a sequence of tokens, represented as $\mathbf{x}_{\mathrm{new}}$. This token sequence is then fed into the fine-tuned model, which processes it to generate a probability distribution over the possible classes.
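The pipeline above (tokenize, run the fine-tuned model, normalize to class probabilities) can be sketched in a few lines. The vocabulary, tokenizer, and per-class weights below are toy stand-ins invented for illustration, not any real fine-tuned model; in practice $F_{\tilde{\omega},\tilde{\theta}}(\cdot)$ would be a trained network and the weights would come from fine-tuning.

```python
import math

# Toy vocabulary and tokenizer (hypothetical stand-ins for a real tokenizer).
VOCAB = {"<unk>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "terrible": 5}

def tokenize(text):
    """Map raw text to a sequence of token ids (x_new)."""
    return [VOCAB.get(w, VOCAB["<unk>"]) for w in text.lower().split()]

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "fine-tuned" parameters: one sparse weight vector per class.
# Fixed here for illustration; a real model learns these during fine-tuning.
CLASSES = ["positive", "negative"]
WEIGHTS = {
    "positive": {4: 2.0, 5: -2.0},   # "great" votes positive
    "negative": {4: -2.0, 5: 2.0},   # "terrible" votes negative
}

def classify(text):
    """Inference: tokenize, score each class, normalize to probabilities."""
    x_new = tokenize(text)
    logits = [sum(WEIGHTS[c].get(t, 0.0) for t in x_new) for c in CLASSES]
    return dict(zip(CLASSES, softmax(logits)))

print(classify("the movie was great"))
```

The key point is that inference is a forward pass only: no parameters are updated, and the model's output is read off as the class with the highest probability.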


Updated 2026-05-02
