Learn Before
Multiple Choice

A development team is fine-tuning a language model for a specialized task. They observe two distinct outcomes from their experiments:

  1. Using only discrete, human-written instructions (hard prompts) results in outputs that correctly follow a required format but lack contextual subtlety.
  2. Using only learnable, continuous vectors (soft prompts) as guidance produces more subtle, context-aware outputs, but these frequently deviate from the required format.

Based on these observations, which of the following strategies would be most effective for creating a model that produces outputs that are both structurally correct and contextually subtle?
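The implied remedy is a hybrid prompt: prepend learnable continuous vectors to the embeddings of a fixed, human-written instruction, so the discrete text anchors the output format while the soft vectors are tuned for contextual nuance. A minimal sketch of how such a prompt could be assembled, using a toy vocabulary, embedding size, and instruction that are all illustrative assumptions:

```python
import random

EMBED_DIM = 4  # toy embedding size (assumption)
# Hypothetical hard-prompt vocabulary for a format-enforcing instruction.
VOCAB = {"Summarize": 0, "in": 1, "JSON": 2, ":": 3}

# Frozen embedding table for the discrete (hard) prompt tokens.
random.seed(0)
embedding_table = [[random.gauss(0.0, 1.0) for _ in range(EMBED_DIM)]
                   for _ in range(len(VOCAB))]

# Learnable continuous (soft) prompt: free vectors that gradient descent
# would update during tuning; here they are only initialized.
NUM_SOFT_TOKENS = 2
soft_prompt = [[0.0] * EMBED_DIM for _ in range(NUM_SOFT_TOKENS)]

def build_hybrid_prompt(instruction_tokens):
    """Concatenate soft vectors with embeddings of the hard instruction."""
    hard_embeds = [embedding_table[VOCAB[tok]] for tok in instruction_tokens]
    # Soft vectors first, then the format-anchoring instruction embeddings.
    return soft_prompt + hard_embeds

prompt = build_hybrid_prompt(["Summarize", "in", "JSON", ":"])
print(len(prompt))  # 2 soft vectors + 4 hard tokens = 6 input positions
```

During fine-tuning, only `soft_prompt` would receive gradient updates while the instruction tokens stay fixed, which is what lets the model keep the format the hard prompt enforces.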


Updated 2025-09-26

Tags: Data Science, Ch.3 Prompting - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science