Multiple Choice

A team is developing a system to generate high-quality summaries of news articles. They are considering two different approaches for combining the outputs of several text-generation models:

  • Approach 1: Combine the outputs of 10 models. All 10 models are based on the same underlying architecture and were trained on slightly different subsets of the same massive news corpus.
  • Approach 2: Combine the outputs of 3 models. Each model has a different architecture, and each was trained on a distinct type of text data (one on formal reports, one on opinion blogs, and one on encyclopedic articles).

Which approach is more likely to produce a consistently better and more reliable summary, and what is the most accurate reason for its superiority?
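The trade-off in the question (many correlated models vs. a few diverse ones) can be illustrated with a small simulation. Below is a hedged sketch, not part of the original question: it models each system's output as simply "correct" or "incorrect" on a task, uses majority voting as a stand-in for output combination, and uses a `correlation` parameter (an assumption introduced here) as a crude proxy for models sharing an architecture and training corpus.

```python
import random

random.seed(0)

def vote(predictions):
    # Majority vote over binary correct/incorrect outcomes.
    return sum(predictions) > len(predictions) / 2

def simulate(n_models, accuracy, correlation, trials=10_000):
    """Fraction of trials where the ensemble's majority vote is correct.

    correlation: probability that a model simply copies a single shared
    outcome -- a crude stand-in for models trained on near-identical data
    making the same errors.
    """
    wins = 0
    for _ in range(trials):
        shared = random.random() < accuracy  # one shared-outcome draw
        preds = []
        for _ in range(n_models):
            if random.random() < correlation:
                preds.append(shared)  # correlated model copies the shared draw
            else:
                preds.append(random.random() < accuracy)  # independent draw
        if vote(preds):
            wins += 1
    return wins / trials

# Approach 1: 10 models, highly correlated errors (same architecture/corpus).
# Approach 2: 3 models, nearly independent errors (diverse architectures/data).
print("Approach 1:", simulate(10, accuracy=0.7, correlation=0.9))
print("Approach 2:", simulate(3, accuracy=0.7, correlation=0.1))
```

Under these assumptions, the three diverse models outperform the ten correlated ones: when errors are correlated, adding more copies of essentially the same model does little, whereas independent errors let majority voting cancel individual mistakes.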

Updated 2025-10-01

Tags: Ch.5 Inference - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Evaluation in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science