
Enhancing LLM Faithfulness and Robustness via Prompting

To improve an LLM's robustness against inaccurate retrieved texts, one can design prompts that explicitly instruct the model to stay faithful to the provided facts. Such prompts can also direct the LLM to abstain from answering when the provided information is insufficient or incorrect, preventing the generation of unsupported claims.
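The idea above can be sketched as a prompt template. This is a minimal illustration: the exact wording, the `{context}`/`{question}` placeholders, and the "I don't know" abstention phrase are assumptions for demonstration, not a prescribed format from the chapter.

```python
# Illustrative faithfulness-oriented prompt template (hypothetical wording).
# It instructs the model to rely only on the retrieved context and to
# abstain with a fixed phrase when the context cannot support an answer.
FAITHFUL_QA_TEMPLATE = (
    "Answer the question using ONLY the context below. "
    "If the context is insufficient, irrelevant, or self-contradictory, "
    "reply exactly with \"I don't know\" instead of guessing.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

def build_faithful_prompt(context: str, question: str) -> str:
    """Fill the template with retrieved text and the user's question."""
    return FAITHFUL_QA_TEMPLATE.format(context=context, question=question)
```

The fixed abstention phrase makes it easy to detect downstream when the model declined to answer, so the application can fall back to another retrieval round or a clarifying question.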

Updated 2026-04-30


Ch.3 Prompting - Foundations of Large Language Models
