Reframing Numerical Scoring as Text Generation
In the context of large language models, numerical scoring tasks, such as evaluating translation quality, can be reframed as text generation problems. Instead of training the model to compute and output a continuous numerical value, the task is redefined so that the model generates a sequence of text characters that represent the numerical score, such as the string "0.1".
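The idea can be sketched in a few lines: the scoring task is expressed as a text-completion prompt, and the score the model "computes" is simply a string it generates, which is parsed back into a number afterward. The prompt wording, the stub model output, and the helper names below are illustrative assumptions, not a reference implementation.

```python
# Hypothetical sketch: reframing translation-quality scoring as text generation.
# The generated continuation is stubbed here; a real LLM would produce it.

def build_scoring_prompt(source: str, translation: str) -> str:
    """Format the numerical scoring task as a text-completion prompt."""
    return (
        "Rate the quality of the translation on a scale from 0 to 1.\n"
        f"Source: {source}\n"
        f"Translation: {translation}\n"
        "Score:"
    )

def parse_score(generated_text: str) -> float:
    """The model emits the score as characters (e.g. \"0.1\"); parse it back."""
    return float(generated_text.strip())

prompt = build_scoring_prompt("Guten Morgen", "Good morning")
generated = " 0.9"  # placeholder for the model's generated continuation
score = parse_score(generated)
print(score)  # 0.9
```

Because the score is just another token sequence, the same generation-based interface used for translation or classification covers scoring as well, with only a parsing step added at the end.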
Tags
Foundations of Large Language Models
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Example of Reframing Text Classification as Text Generation
Instruction-based Prompts
Few-Shot Learning
Alternative Prompt Formats for Machine Translation
Text Classification in NLP
Versatility of Prompt Templates
Grammaticality Judgment as a Binary Classification Task for LLMs
Formal Definition of LLM Inference
Illustrative Purpose of Prompting Examples
The paradigm of using Large Language Models (LLMs) allows many different NLP tasks (e.g., translation, sentiment analysis) to be reframed as text generation problems. What is the fundamental advantage of this approach over traditional methods that required building a separate, specifically trained model for each individual task?
Reframing a Traditional NLP Task
Choosing an NLP Development Strategy
Classification via Prompt Completion