Case Study

Analysis of a Fine-Tuning Strategy

A development team is fine-tuning a large language model to function as a recipe generator. To create their training dataset quickly, they use a highly simplified and uniform instruction format for every example, such as 'Generate: Pasta Carbonara' or 'Generate: Chocolate Chip Cookies'. After training, they observe that the model performs perfectly when given prompts that exactly match this format. However, when test users submit more natural or varied requests like 'What's a good recipe for a classic Italian pasta dish with eggs and bacon?' or 'I want to bake some cookies with chocolate chips, can you give me the steps?', the model frequently fails to provide a relevant or correct recipe. What is the most likely cause of this performance gap?
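To make the scenario concrete, below is a minimal Python sketch contrasting the team's single rigid template with a diversified set of paraphrased instruction templates. The dish names, the template strings, and the build_examples helper are hypothetical, chosen only to illustrate the two dataset-construction strategies; they are not taken from the case study's actual pipeline.

```python
import random

# Hypothetical dish names standing in for the team's recipe dataset.
DISHES = ["Pasta Carbonara", "Chocolate Chip Cookies", "Miso Soup"]

# The team's approach: one rigid template for every training example.
UNIFORM_TEMPLATES = ["Generate: {dish}"]

# A diversified alternative: paraphrased templates that mirror how real
# users actually phrase requests, so the model learns the task rather
# than the literal surface form of the prompt.
DIVERSE_TEMPLATES = [
    "Generate: {dish}",
    "What's a good recipe for {dish}?",
    "I want to make {dish}. Can you give me the steps?",
    "Please share a recipe for {dish}.",
]

def build_examples(dishes, templates):
    """Pair each dish with a randomly chosen instruction template."""
    return [
        {
            "instruction": random.choice(templates).format(dish=dish),
            "response": f"<recipe text for {dish}>",  # placeholder target
        }
        for dish in dishes
    ]

# Uniform prompts: the model can latch onto the fixed "Generate:" string.
print(build_examples(DISHES, UNIFORM_TEMPLATES))

# Varied prompts: the same task seen through many phrasings.
print(build_examples(DISHES, DIVERSE_TEMPLATES))
```

Exposing the model to many surface forms of the same underlying task during fine-tuning is the standard way to keep it from coupling its behavior to one fixed prompt format.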

Tags

Ch.3 Prompting - Foundations of Large Language Models

Analysis in Bloom's Taxonomy