Learn Before
  • Improving LLM Generalization by Diversifying Tasks and Instructions

Example of a Recipe Generation Task for LLMs

To increase diversity in fine-tuning data, Large Language Models can be trained on procedural generation tasks defined by specific instructions. For instance, a model can be prompted with "Show me a recipe for making ice cream." The expected response details the specific ingredients, such as heavy cream, milk, and sugar, followed by a numbered list of sequential steps for completing the recipe.
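As a minimal sketch, such an instruction-response pair could be stored as one record in a supervised fine-tuning dataset. The field names (`instruction`, `response`) and the JSON Lines format below are illustrative assumptions, not a prescribed schema:

```python
import json

# Hypothetical instruction-response pair for a recipe generation task.
# Field names and recipe steps are illustrative, not from a real dataset.
example = {
    "instruction": "Show me a recipe for making ice cream.",
    "response": (
        "Ingredients: heavy cream, milk, sugar.\n"
        "Steps:\n"
        "1. Whisk the sugar into the milk until dissolved.\n"
        "2. Stir in the heavy cream.\n"
        "3. Chill the mixture, then churn in an ice cream maker.\n"
        "4. Freeze until firm before serving."
    ),
}

# Serialize as a single JSON line, a common on-disk layout for SFT data.
line = json.dumps(example)
print(line)
```

Collecting many such pairs across different task types (recipes, creative writing, math word problems) is one way to build the diverse fine-tuning data the section describes.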

Tags

Foundations of Large Language Models

Ch.2 Generative Models - Foundations of Large Language Models

Computing Sciences

Related
  • Multi-Task Capability through Diverse Fine-Tuning Datasets

  • Modern Focus of Instruction Fine-Tuning Datasets

  • Using Diverse Data to Steer LLM Specialization

  • Examples of Instruction-Following Tasks in SFT Datasets

  • A development team has fine-tuned a large language model to be a helpful assistant. They observe that the model excels at summarizing technical documents and answering direct factual questions, which were the primary tasks in its fine-tuning dataset. However, when users ask it to perform more creative tasks like writing a short poem or brainstorming marketing slogans, the model's performance is poor and generic. Which of the following strategies would be the most effective next step to improve the model's ability to handle this wider range of user requests?

  • Using Varied Instructions for a Single Task to Enhance Data Diversity

  • Improving a Customer Service Chatbot's Robustness

  • Characteristics and Limitations of Early Instruction Fine-Tuning Datasets

  • Evaluating a Fine-Tuning Strategy for LLMs

  • Example of a Recipe Generation Task for LLMs

  • Example of a Creative Writing Task for LLMs

  • Example of a Math Word Problem Task for LLMs