Learn Before
Question-Answering Inference
Question-answering inference is a text-pair classification task that determines whether a candidate answer is an appropriate response to a given question. The model processes the question and the candidate answer as a text pair and classifies whether they correspond.
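As a minimal sketch of the text-pair setup (assuming a BERT-style model that packs both texts into one sequence with special tokens), the question and answer are joined as `[CLS] question [SEP] answer [SEP]`, with segment IDs marking which span each token belongs to. The whitespace tokenization and the `pack_text_pair` helper below are illustrative simplifications, not a real subword tokenizer.

```python
def pack_text_pair(question: str, answer: str):
    """Pack a question-answer pair into the [CLS] q [SEP] a [SEP]
    format used by BERT-style text-pair classifiers (toy version:
    naive whitespace tokenization instead of a subword tokenizer)."""
    q_tokens = question.split()
    a_tokens = answer.split()
    tokens = ["[CLS]"] + q_tokens + ["[SEP]"] + a_tokens + ["[SEP]"]
    # Segment IDs: 0 for the question span (including [CLS] and the
    # first [SEP]), 1 for the answer span (including the final [SEP]).
    segment_ids = [0] * (len(q_tokens) + 2) + [1] * (len(a_tokens) + 1)
    return tokens, segment_ids

tokens, segments = pack_text_pair("Who wrote Hamlet?", "William Shakespeare")
```

A classifier head over the final `[CLS]` representation would then output whether the pair is a valid match.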
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.1 Pre-training - Foundations of Large Language Models
Related
Grounded Commonsense Inference
Question-Answering Inference
Natural Language Inference
Semantic Textual Similarity (STS) and Semantic Equivalence
Illustration of BERT for Text-Pair Tasks (Classification and Regression)
An NLP model is tasked with evaluating the following pair of sentences:
Premise: 'The athlete won the gold medal after years of dedicated training.' Hypothesis: 'The athlete is successful.'
The model must determine if the hypothesis logically follows from the premise. Which specific type of text-pair classification problem does this scenario best exemplify?
BERT Input Format for Sentence Pairs
End-to-End Pipeline for Text-Pair Classification
A language model is being used to determine if a product review and a one-sentence summary of that review are semantically equivalent. Arrange the following steps into the correct sequence for how the model processes this text pair to produce a classification.
Duplicate Question Detection on a Q&A Forum
Learn After
Automated Fact-Checking for Customer Support
A machine learning engineer is fine-tuning a transformer-based model to determine if a given answer is a valid response to a specific question. The model has been pre-trained and uses special tokens for classification tasks. Which of the following input formats should the engineer use to correctly structure the data for this task?
A language model is fine-tuned for a task where it must determine if a given answer is a valid response to a question. The model is trained on a large dataset of direct question-answer pairs (e.g., Q: 'Who wrote Hamlet?', A: 'William Shakespeare'). When tested, the model correctly identifies direct answers but incorrectly classifies the following pair as a 'poor match':
- Question: 'What is the primary cause of tides?'
- Answer: 'The primary cause of tides is not wind patterns.'
What is the most likely reason for this misclassification?