Multi-Task Capability through Diverse Fine-Tuning Datasets
A Large Language Model can be fine-tuned to handle multiple Natural Language Processing (NLP) tasks simultaneously by training it on a dataset that pairs instructions with corresponding outputs drawn from a variety of tasks.
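As a concrete illustration, here is a minimal Python sketch of such a mixed instruction dataset; the record format and the examples themselves are hypothetical, not drawn from any specific SFT corpus. Each record pairs an instruction from a different task family (summarization, translation, math, creative writing) with its target output, and every record is rendered into the same plain training string for the standard next-token objective.

    # A minimal sketch of a multi-task instruction dataset, assuming a simple
    # {"instruction": ..., "output": ...} record format (examples are hypothetical).
    multi_task_data = [
        {"instruction": "Summarize: The report covers Q3 revenue growth across regions...",
         "output": "Q3 revenue grew in all regions, led by the EMEA segment."},
        {"instruction": "Translate to French: Good morning.",
         "output": "Bonjour."},
        {"instruction": "Solve: A train travels 60 km in 1.5 hours. What is its speed?",
         "output": "60 / 1.5 = 40 km/h."},
        {"instruction": "Write a two-line poem about autumn.",
         "output": "Leaves drift down in amber light,\nThe year exhales toward the night."},
    ]

    # Each record is rendered into one training string; the model is then
    # fine-tuned on these strings with the usual next-token prediction loss.
    def render(example):
        return f"Instruction: {example['instruction']}\nResponse: {example['output']}"

    for ex in multi_task_data:
        print(render(ex))

Because every task is expressed in the same instruction/response format, a single training run over the mixed dataset teaches the model all of the task families at once.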
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Modern Focus of Instruction Fine-Tuning Datasets
Using Diverse Data to Steer LLM Specialization
Examples of Instruction-Following Tasks in SFT Datasets
A development team has fine-tuned a large language model to be a helpful assistant. They observe that the model excels at summarizing technical documents and answering direct factual questions, which were the primary tasks in its fine-tuning dataset. However, when users ask it to perform more creative tasks like writing a short poem or brainstorming marketing slogans, the model's performance is poor and generic. Which of the following strategies would be the most effective next step to improve the model's ability to handle this wider range of user requests?
Using Varied Instructions for a Single Task to Enhance Data Diversity
Improving a Customer Service Chatbot's Robustness
Characteristics and Limitations of Early Instruction Fine-Tuning Datasets
Evaluating a Fine-Tuning Strategy for LLMs
Example of a Recipe Generation Task for LLMs
Example of a Creative Writing Task for LLMs
Example of a Math Word Problem Task for LLMs
Learn After
Evaluating Multi-Task Fine-Tuning Strategies for AI Assistants
Developing a Multi-Function Customer Service AI
A development team is building a single language model intended to serve as a versatile corporate assistant. The model must be able to summarize internal reports, answer questions based on a company knowledge base, and draft professional emails. After an initial training phase, the team observes that the model is excellent at drafting emails but performs poorly on summarization and question-answering. Which of the following adjustments to their training process is most likely to create a single model that is proficient in all three tasks?
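One adjustment consistent with the idea above is to rebalance the training mixture so that every task is equally represented. The following minimal Python sketch assumes each example is tagged with its task; the task names and counts are hypothetical, chosen to mirror the scenario's skew toward email drafting.

    # A minimal sketch of rebalancing a skewed multi-task training mix,
    # assuming each example carries a task tag (tags and counts are hypothetical).
    import random

    by_task = {
        "email_drafting": [f"email_{i}" for i in range(900)],
        "summarization": [f"summary_{i}" for i in range(60)],
        "question_answering": [f"qa_{i}" for i in range(40)],
    }

    # Sample each task with replacement up to the size of the largest task,
    # so all three tasks contribute equally to each training epoch.
    target = max(len(examples) for examples in by_task.values())
    balanced = []
    for task, examples in by_task.items():
        balanced.extend(random.choices(examples, k=target))
    random.shuffle(balanced)

    print(len(balanced))  # 2700: 900 sampled examples per task

Upsampling the under-represented tasks (or, equivalently, downsampling the dominant one) keeps a single model from overfitting to whichever task happens to dominate the raw data.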