Learn Before
  • Applying Prediction Networks to Context Token Outputs

Case Study

Debugging a Span Prediction Model

Based on the standard architecture for span-based question answering, identify the most likely design error in the following scenario and explain your reasoning.
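For reference, the standard span-based QA design applies two small prediction networks (often just linear layers) to the contextualized embedding of every input token, producing a start probability and an end probability per token. The sketch below is a minimal NumPy illustration only: the sequence length, embedding dimension, and random weights are invented stand-ins for a trained encoder and trained prediction heads.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the token axis.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical setup: 6 contextualized token embeddings of dimension 4,
# as produced by the encoder over the combined (query + context) input.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))   # one contextualized embedding per input token

# In the standard design, the start and end prediction networks are two
# independent learned projections applied to EVERY token embedding.
w_start = rng.standard_normal(4)  # stand-in for the trained start head
w_end = rng.standard_normal(4)    # stand-in for the trained end head

start_probs = softmax(H @ w_start)  # P(token i is the answer start)
end_probs = softmax(H @ w_end)      # P(token i is the answer end)
```

The key point for the case study is the input to `softmax`: the prediction networks score the final contextualized embeddings of the tokens, not the raw input embeddings and not a single pooled vector.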

Updated 2025-10-02

Contributors: Gemini AI (Google)

Tags
  • Ch.2 Generative Models - Foundations of Large Language Models
  • Foundations of Large Language Models
  • Foundations of Large Language Models Course
  • Computing Sciences
  • Analysis in Bloom's Taxonomy
  • Cognitive Psychology
  • Psychology
  • Social Science
  • Empirical Science
  • Science

Related
  • A question-answering model is given a query and a context passage. It processes the combined text and generates a final contextualized embedding for every token. To identify the specific text span within the passage that answers the query, the model must calculate start and end probabilities for each potential token. Which set of embeddings should be used as input to the prediction networks that perform this calculation?

  • Debugging a Span Prediction Model

  • In a span prediction model designed for question answering, after the entire input (query + context) has been processed into contextualized token embeddings, the prediction networks for the answer's start and end positions are applied to the embedding of every token in the input sequence, and the predicted answer span is then restricted to tokens within the context passage.
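The span-extraction step described above can be illustrated with a small example: given per-token start and end probabilities, the predicted answer is the (start, end) pair with the highest joint score, subject to start ≤ end and to both positions lying inside the context passage rather than the query. The probabilities and token ranges below are invented for illustration.

```python
import numpy as np

# Hypothetical probabilities over a 6-token input; tokens 0-1 are the
# query, tokens 2-5 are the context passage (invented for this sketch).
start_probs = np.array([0.05, 0.05, 0.40, 0.30, 0.10, 0.10])
end_probs   = np.array([0.05, 0.05, 0.10, 0.20, 0.40, 0.20])
context = range(2, 6)  # answer spans may only come from the passage

best_score, best_span = -1.0, None
for i in context:
    for j in context:
        if i <= j:  # a valid span starts no later than it ends
            score = start_probs[i] * end_probs[j]
            if score > best_score:
                best_score, best_span = score, (i, j)

# best_span is the (start, end) token pair with the highest joint score.
```

A design that scored only query tokens, or only a pooled sentence vector, could never produce this per-token span, which is why the prediction networks must see the contextualized embedding of each token.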

© 1Cademy 2026