Multiple Choice

A developer is testing two prompts for a text summarization task.

  • Prompt 1 results in a summary with a very high log-likelihood score from the model, but human evaluators rate the summary as 'poor' because it misses key points.
  • Prompt 2 results in a summary with a lower log-likelihood score, but human evaluators rate the summary as 'excellent' because it accurately captures all key points.

Based on this scenario, what is the most accurate conclusion about evaluating these prompts?
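The tension in the scenario can be made concrete with a toy sketch: a model's (average) log-likelihood rewards text built from tokens the model finds probable, which is not the same as text that captures the key points. The model below is a hypothetical unigram scorer with made-up probabilities, used purely for illustration; a real LLM scores tokens in context, but the same mismatch can arise.

```python
import math

# Hypothetical unigram "language model": token -> probability.
# Probabilities are invented for illustration only.
MODEL_PROBS = {
    "the": 0.20, "report": 0.05, "shows": 0.05, "growth": 0.02,
    "overall": 0.01, "revenue": 0.01, "fell": 0.005, "sharply": 0.003,
}

def avg_log_likelihood(tokens):
    """Average log-probability per token; higher means the model
    finds the text more 'likely', not that it is more accurate."""
    return sum(math.log(MODEL_PROBS[t]) for t in tokens) / len(tokens)

# A fluent but generic summary built from common tokens...
fluent = ["the", "report", "shows", "growth", "overall"]
# ...versus a summary that states the key point with rarer (but correct) tokens.
accurate = ["revenue", "fell", "sharply"]

# The generic summary scores higher under the model,
# even though a human would rate the accurate one as better.
print(avg_log_likelihood(fluent) > avg_log_likelihood(accurate))  # prints True
```

This mirrors the scenario: the model-assigned score tracks fluency and typicality, while human evaluators judge coverage of key points, so the two rankings can disagree.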

Updated 2025-09-26

Tags

  • Ch.3 Prompting - Foundations of Large Language Models
  • Foundations of Large Language Models
  • Foundations of Large Language Models Course
  • Computing Sciences
  • Analysis in Bloom's Taxonomy
  • Cognitive Psychology
  • Psychology
  • Social Science
  • Empirical Science
  • Science