Example of a Multi-Turn Conversation for LLM Fine-Tuning
A sample multi-turn conversation illustrates how a Large Language Model maintains context across turns. The dialogue proceeds as follows:
- User: 'Who won the FIFA World Cup 2022?'
- Assistant: 'Argentina won the FIFA World Cup 2022.'
- User: 'Where was it held?'
- Assistant: 'The 2022 FIFA World Cup was held in Qatar.'
- User: 'How many times has Argentina won the World Cup?'
- Assistant: 'Argentina has won the FIFA World Cup three times.'
To resolve the pronoun 'it' in the second question to the 2022 World Cup, the model must take the previous turn's context into account; each assistant reply is therefore conditioned on the entire preceding dialogue, not just the latest user message.
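One common way to encode such a dialogue for fine-tuning is as an ordered list of role-tagged messages that is flattened into a single training sequence, so every assistant turn is conditioned on all earlier turns. A minimal sketch in Python follows; the role tags and the `<|...|>` delimiter template are illustrative assumptions, not the format of any particular model:

```python
# The example dialogue as role-tagged messages (illustrative schema).
messages = [
    {"role": "user", "content": "Who won the FIFA World Cup 2022?"},
    {"role": "assistant", "content": "Argentina won the FIFA World Cup 2022."},
    {"role": "user", "content": "Where was it held?"},
    {"role": "assistant", "content": "The 2022 FIFA World Cup was held in Qatar."},
    {"role": "user", "content": "How many times has Argentina won the World Cup?"},
    {"role": "assistant", "content": "Argentina has won the FIFA World Cup three times."},
]

def flatten(msgs):
    """Concatenate all turns into one training sequence so that each
    assistant reply sees the full preceding context."""
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in msgs)

sequence = flatten(messages)
# The pronoun 'it' in the second user turn is resolvable only because the
# first exchange appears earlier in the same flattened sequence.
```

Because the first exchange precedes "Where was it held?" in the flattened sequence, the model can link 'it' back to the 2022 World Cup; training on each user turn in isolation would discard exactly this context.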
Ch.2 Generative Models - Foundations of Large Language Models