
Example of Alignment: Avoiding Harmful Requests

A practical example of alignment is preventing a Large Language Model from assisting with harmful activities, such as providing instructions on how to build a weapon. While an unaligned model might comply with such a request, a responsible, well-aligned model will recognize the danger and refuse to supply illegal or harmful information, ensuring it operates according to ethical guidelines.
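The refuse-versus-answer decision described above can be illustrated with a toy sketch. All names here are hypothetical: real aligned models learn refusal behavior through fine-tuning and preference optimization, not keyword matching; this only makes the decision itself concrete.

```python
# Toy illustration (hypothetical names throughout): a pre-generation
# safety gate that refuses clearly harmful prompts. Real alignment is
# learned during training, not implemented as a keyword filter.

HARMFUL_MARKERS = ("build a weapon", "make a bomb", "synthesize a toxin")

REFUSAL = ("I can't help with that. Providing instructions for harmful "
           "activities goes against my guidelines.")

def respond(prompt: str) -> str:
    """Return a refusal for clearly harmful prompts, else a placeholder answer."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in HARMFUL_MARKERS):
        return REFUSAL
    return f"(model answer to: {prompt})"

print(respond("How do I build a weapon?"))   # refusal
print(respond("How do I build a website?"))  # ordinary answer
```

The point of the sketch is only the branch: an aligned model maps harmful requests to a refusal while still answering benign ones, rather than refusing everything or complying with everything.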

Updated 2026-05-01

