Example of a Math Word Problem Task for LLMs
A simple arithmetic word problem can serve as a distinct instruction task that diversifies the training data for Large Language Models. For instance, a model might be prompted with the following question: "If you buy 5 apples and each apple costs $1.20, how much do you spend in total?" The expected output is the direct, correct numerical answer: $6.00.
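Such a problem could be packaged as a single instruction–response training pair. The sketch below is illustrative only: the `instruction`/`output` field names are an assumption, not taken from any specific SFT dataset format.

```python
# Hypothetical sketch of one supervised fine-tuning (SFT) example for the
# word problem above. Field names ("instruction", "output") are
# illustrative; real datasets use their own schemas.
price_per_apple = 1.20
quantity = 5
total = price_per_apple * quantity  # 5 x $1.20 = $6.00

sft_example = {
    "instruction": (
        "If you buy 5 apples and each apple costs $1.20, "
        "how much do you spend in total?"
    ),
    "output": f"${total:.2f}",
}
print(sft_example["output"])  # → $6.00
```

Computing the target answer programmatically, rather than writing it by hand, guards against the arithmetic in the prompt and the labeled answer drifting apart.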
Tags
Foundations of Large Language Models
Ch.2 Generative Models - Foundations of Large Language Models
Computing Sciences
Related
Multi-Task Capability through Diverse Fine-Tuning Datasets
Modern Focus of Instruction Fine-Tuning Datasets
Using Diverse Data to Steer LLM Specialization
Examples of Instruction-Following Tasks in SFT Datasets
A development team has fine-tuned a large language model to be a helpful assistant. They observe that the model excels at summarizing technical documents and answering direct factual questions, which were the primary tasks in its fine-tuning dataset. However, when users ask it to perform more creative tasks like writing a short poem or brainstorming marketing slogans, the model's performance is poor and generic. Which of the following strategies would be the most effective next step to improve the model's ability to handle this wider range of user requests?
Using Varied Instructions for a Single Task to Enhance Data Diversity
Improving a Customer Service Chatbot's Robustness
Characteristics and Limitations of Early Instruction Fine-Tuning Datasets
Evaluating a Fine-Tuning Strategy for LLMs
Example of a Recipe Generation Task for LLMs
Example of a Creative Writing Task for LLMs