Multiple Choice

Consider a language model that uses a standard self-attention mechanism but lacks any method for encoding word positions. The model is given two distinct input sentences:

Sentence 1: 'A dog chases a cat.'
Sentence 2: 'A cat chases a dog.'

After these sentences pass through a single self-attention layer, how would the final output representation for the word 'chases' compare between the two sentences?
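The scenario can be checked directly. The sketch below is a minimal, hypothetical setup: toy word embeddings and randomly initialized query/key/value projections stand in for a trained model, and no positional encoding is added to the inputs. Because both sentences contain the same multiset of words, each token's attention output depends only on its own embedding and the unordered set of all token embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Hypothetical toy embeddings for the shared vocabulary (no positional encoding).
vocab = {w: rng.normal(size=d) for w in ["a", "dog", "chases", "cat", "."]}

# Randomly initialized projection matrices (stand-ins for trained weights).
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def self_attention(tokens):
    """One self-attention layer over raw token embeddings, order-unaware."""
    X = np.stack([vocab[t] for t in tokens])          # (seq_len, d)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                     # scaled dot-product scores
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ V                                # weighted sum of values

s1 = ["a", "dog", "chases", "a", "cat", "."]
s2 = ["a", "cat", "chases", "a", "dog", "."]

out1 = self_attention(s1)[2]   # output representation of 'chases' in sentence 1
out2 = self_attention(s2)[2]   # output representation of 'chases' in sentence 2
print(np.allclose(out1, out2))  # True: the two representations coincide
```

Because the attention output for 'chases' is a softmax-weighted sum over the same set of key/value vectors in both sentences, permuting the other words leaves it unchanged, which is exactly what the question probes.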

Updated 2025-09-28

Tags

Data Science

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science