Troubleshooting a Factual Chatbot's Output
A team develops a chatbot intended to provide precise, factual answers for a customer support knowledge base. During testing, they observe that while the chatbot is fluent, it frequently provides answers that are imaginative and sometimes factually incorrect. Upon reviewing the model's text generation configuration, they find the temperature parameter is set to 1.5. Based on this information, analyze the likely cause of the chatbot's undesirable behavior and explain the reasoning behind your conclusion.
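A minimal sketch of the mechanism at play: temperature divides the logits before the softmax, so values above 1.0 flatten the resulting distribution and give low-probability tokens a real chance of being sampled. The logit values below are hypothetical, chosen only to illustrate the effect:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max to avoid overflow in exp()
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: one strong candidate, one implausible one.
logits = [3.0, 2.5, 2.0, -1.0]
for t in (0.5, 1.0, 1.5):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At a temperature of 1.5 the top token's probability shrinks and implausible tokens gain mass, so sampling more often picks "creative" continuations, which is the behavior described in the scenario.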
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Token Sampling from a Conditional Probability Distribution
Temperature-Scaled Softmax for Renormalized Probability
A language model has calculated the following raw scores (logits) for the next potential token:
{'mat': 3.0, 'rug': 2.5, 'chair': 2.0, 'moon': -1.0}. To control the randomness of the output, a temperature parameter is applied to these scores before they are converted into a final probability distribution for sampling. Which of the following probability distributions most likely resulted from applying a low temperature (e.g., a value less than 1.0)?
You are configuring a text generation model for different tasks. Match each task with the description of the temperature setting that would be most appropriate to achieve the desired output.