Learn Before
Text Simplification as a Sequence-to-Sequence Task
Text simplification can be framed as a sequence-to-sequence learning problem: a labeled dataset of complex–simple text pairs is used to train an encoder-decoder model, which learns to transform a complex input text into its simplified equivalent.
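The framing above can be sketched in code. This is a minimal illustration, assuming PyTorch is available; the two toy sentence pairs, the GRU encoder-decoder architecture, and all hyperparameters are illustrative choices, not a prescribed recipe — a real simplification model would be trained on thousands of aligned complex–simple pairs.

```python
import torch
import torch.nn as nn

# Toy parallel corpus: each pair maps a complex sentence to a simpler one.
# (Hypothetical examples; a real dataset needs many aligned pairs.)
pairs = [
    ("the physician administered the medication", "the doctor gave the medicine"),
    ("the vehicle decelerated rapidly", "the car slowed down fast"),
]

# Shared word vocabulary with special tokens.
PAD, SOS, EOS = 0, 1, 2
vocab = {"<pad>": PAD, "<sos>": SOS, "<eos>": EOS}
for src, tgt in pairs:
    for w in (src + " " + tgt).split():
        vocab.setdefault(w, len(vocab))

def encode(sent):
    return [vocab[w] for w in sent.split()] + [EOS]

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder for text simplification."""
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden, padding_idx=PAD)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        # Encoder compresses the complex sentence into a context state.
        _, h = self.encoder(self.emb(src))
        # Decoder generates the simplified sentence from that state
        # (teacher forcing: the gold previous token is fed at each step).
        dec_out, _ = self.decoder(self.emb(tgt_in), h)
        return self.out(dec_out)  # logits over the vocabulary

model = Seq2Seq(len(vocab))
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# A few supervised training steps on the toy pairs.
for _ in range(50):
    for src, tgt in pairs:
        s = torch.tensor([encode(src)])
        t = torch.tensor([[SOS] + encode(tgt)])
        logits = model(s, t[:, :-1])          # predict each next target token
        loss = loss_fn(logits.reshape(-1, len(vocab)), t[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

At inference time the decoder would instead be run step by step from the `<sos>` token, feeding each predicted word back in until `<eos>` is produced.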
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.1 Pre-training - Foundations of Large Language Models
Ch.3 Prompting - Foundations of Large Language Models
Related
Simplifying Task Descriptions by Deletion
Simplifying Task Descriptions by Paraphrasing
Using LLMs for Prompt Simplification
Example of a Detailed Task Description for a Healthcare and Finance QA Model
A developer is trying to make their instructions for a language model more efficient. Below is their original set of instructions. Which of the following revisions best simplifies the text while preserving all the essential constraints and the core intent of the original request?
Original Instructions: "I would be very grateful if you could please function as an expert travel planner. Your task is to create a comprehensive and detailed seven-day travel itinerary for a family of four, which includes two adults and two teenagers, who will be visiting the city of Paris for the very first time. It is important that the itinerary focuses on their interests in art, history, and food. Additionally, please make sure to incorporate some engaging and fun activities that would appeal specifically to the teenagers in the group."
Analyzing a Failed Prompt Simplification
Text Simplification as a Sequence-to-Sequence Task
Heuristic-Based Deletion for Prompt Simplification
Applying Prompt Simplification
Learn After
Troubleshooting a Text Simplification Model
A team is developing a model to simplify complex medical jargon into plain language for patients. They have successfully trained an encoder-decoder model on a large dataset of medical texts and their simplified versions. However, when they test the model, they find it frequently produces outputs that are grammatically correct and simple, but factually inaccurate (e.g., rendering 'benign tumor' as 'harmless growth' but 'malignant tumor' as 'minor lump'). What is the most likely cause of this specific type of failure?
You are tasked with creating a text simplification tool using a sequence-to-sequence learning approach. Arrange the following core steps in the correct chronological order, from initial data preparation to generating a final output.