Concept

Insufficiency of Prompting for Tasks Outside an LLM's Pre-training Scope

The effectiveness of prompting is fundamentally constrained by the knowledge an LLM acquired during pre-training. If a task requires knowledge of a domain the model was never trained on, such as a language absent from its pre-training corpus, even sophisticated prompt engineering will fail to produce good results. The appropriate remedy in this situation is not better prompts but augmenting the model's training, for example by continued pre-training or fine-tuning on data from the target domain.
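The effect can be sketched with a toy bigram language model: a model trained only on an English corpus assigns out-of-domain text a high perplexity that no prompt can lower, and the perplexity drops only once the in-domain data is added to training. The corpora, the fixed smoothing vocabulary size `V`, and the `BigramLM` class are all illustrative assumptions, not something taken from the text.

```python
import math
from collections import Counter, defaultdict

V = 50  # fixed add-one smoothing vocabulary size (an assumption of this toy example)

class BigramLM:
    """Toy add-one-smoothed bigram language model, for illustration only."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        # Accumulate bigram counts; calling train() again adds more data.
        for a, b in zip(tokens, tokens[1:]):
            self.counts[a][b] += 1

    def prob(self, a, b):
        follow = self.counts[a]
        return (follow[b] + 1) / (sum(follow.values()) + V)

    def perplexity(self, tokens):
        pairs = list(zip(tokens, tokens[1:]))
        logp = sum(math.log(self.prob(a, b)) for a, b in pairs)
        return math.exp(-logp / len(pairs))

# "Pre-training" corpus in one language; target domain in another.
pretrain = "the cat sat on the mat and the dog sat on the rug".split()
domain = "der hund sitzt auf der matte und die katze sitzt auf dem teppich".split()

lm = BigramLM()
lm.train(pretrain)
before = lm.perplexity(domain)  # domain never seen: stuck at the smoothing floor

lm.train(domain)                # augment training with in-domain data
after = lm.perplexity(domain)   # perplexity drops once the domain is in training

print(before > after)  # True
```

No rewording of the evaluation text (analogous to prompt engineering on a fixed model) changes `before`; only adding the domain data to training does.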

Updated 2026-04-29

Tags

Ch.3 Prompting - Foundations of Large Language Models

Computing Sciences