Learn Before
Short Answer

Model-Dependent Prompt Performance

A team of developers has perfected a prompt that consistently generates high-quality code snippets with a specific large language model. When they switch to a new, more advanced model from a different provider, they find that the same prompt produces less reliable results. Explain the most likely underlying reason for this decrease in performance, even though the new model is considered more powerful overall.
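As background for the question, one concrete way a prompt can be model-dependent is the chat template: different model families are fine-tuned on different input formats. The sketch below (not part of the course text; the template strings follow common ChatML and Llama-2-style conventions) renders the same instruction into two formats. A model tuned on one format may handle the other less reliably:

```python
# Illustrative sketch: one instruction, two model-specific chat templates.
# Template formats follow common conventions (ChatML vs. Llama-2 [INST]);
# exact delimiters vary by provider and model version.

INSTRUCTION = "Write a Python function that reverses a string."

def to_chatml(user_msg: str) -> str:
    """ChatML-style template used by some model families."""
    return (
        "<|im_start|>user\n"
        f"{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def to_llama2(user_msg: str) -> str:
    """[INST]-style template used by Llama-2-family chat models."""
    return f"<s>[INST] {user_msg} [/INST]"

# The payload is identical, but the surrounding structure differs,
# so a prompt tuned against one format does not transfer unchanged.
print(to_chatml(INSTRUCTION))
print(to_llama2(INSTRUCTION))
```

Formatting differences like these are only one facet; differences in training data, instruction tuning, and decoding behavior contribute as well.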


Updated 2025-10-06


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science