
Input Embeddings in LLMs

In Large Language Models, input tokens are not processed directly as symbols. Instead, each token is converted into a numerical vector called an embedding. These embeddings, produced by a learned token embedding model, serve as the actual input on which the core LLM operates.
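The conversion described above can be sketched as a simple table lookup: the token embedding model is, at its core, a matrix with one learned row per vocabulary entry. The minimal example below uses a hypothetical toy vocabulary and random weights purely for illustration; in a real LLM, both the vocabulary and the embedding weights come from training.

```python
import numpy as np

# Hypothetical toy vocabulary and embedding size (illustrative values only).
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_dim = 4

# The embedding table: one row of weights per token in the vocabulary.
# Here the rows are random; in a real model they are learned parameters.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(tokens):
    """Map token strings to their embedding vectors via table lookup."""
    ids = [vocab[t] for t in tokens]   # token -> integer id
    return embedding_table[ids]        # id -> embedding row

vectors = embed(["the", "cat", "sat"])
print(vectors.shape)  # one embedding vector per input token: (3, 4)
```

The core LLM never sees the strings `"the"`, `"cat"`, `"sat"`; it receives only the resulting matrix of vectors.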

Updated 2025-10-07


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
