Learn Before
  • Text Generation from an Initial Context

Relation

Stochastic decoding methods in TGM

Stochastic decoding methods select the next token by sampling at random from the model's predicted probability distribution over the vocabulary, rather than always choosing the single most likely token. There are two common types:

  • top-k sampling
  • top-p (or nucleus) sampling
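The two strategies above can be sketched as follows. This is a minimal illustration, not a production implementation: the function names and the toy five-token distribution are assumptions, and the next-token probabilities are taken as an already-normalized NumPy vector.

```python
import numpy as np

def top_k_sample(probs, k, rng=None):
    """Top-k sampling: sample a token id from the k most probable tokens only."""
    if rng is None:
        rng = np.random.default_rng()
    top = np.argsort(probs)[-k:]           # indices of the k highest-probability tokens
    p = probs[top] / probs[top].sum()      # renormalize over the kept tokens
    return int(rng.choice(top, p=p))

def top_p_sample(probs, p_threshold, rng=None):
    """Top-p (nucleus) sampling: sample from the smallest set of tokens
    whose cumulative probability reaches p_threshold."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]        # token ids, most probable first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p_threshold)) + 1  # size of the nucleus
    kept = order[:cutoff]
    p = probs[kept] / probs[kept].sum()    # renormalize over the nucleus
    return int(rng.choice(kept, p=p))

# Toy next-token distribution over a 5-token vocabulary (illustrative values):
probs = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
print(top_k_sample(probs, k=2))                 # always token 0 or 1
print(top_p_sample(probs, p_threshold=0.79))    # nucleus is {0, 1}, so always 0 or 1
```

Note the key design difference: top-k keeps a fixed number of candidates regardless of how the probability mass is spread, while top-p adapts the candidate set to the shape of the distribution at each step.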


Updated 2025-10-10

Contributors are:

Anju Manoj
Gemini AI

Who are from:

San Jose State University
Google

References


  • Automatic Detection of Machine Generated Text: A Critical Survey

  • Foundations of Large Language Models Course

Tags

Data Science

Foundations of Large Language Models Course

Computing Sciences

Related
  • Examples of text generation

  • Decoding Methods to Generate Continuations in TGM

  • Stochastic decoding methods in TGM

  • Simultaneous Processing of Input Context Tokens

  • Building the Encoded Representation of Input

  • A user gives a language model the input: "Ancient Rome was a civilization known for its". The model then produces the following output: "engineering marvels, such as aqueducts and roads." Based on the two-stage process of text generation, which statement best analyzes this interaction?

  • Arrange the following stages into the correct sequence that describes how a language model generates text based on an initial input.

  • Analyzing a Code Generation Scenario

Learn After
  • Top-k Sampling

  • Top-p (Nucleus) Sampling

  • A team developing a language model for creative storytelling finds that its generated text is often repetitive and predictable, frequently getting stuck in loops (e.g., 'I am I am I am...'). Which of the following decoding strategies would be most effective at addressing this issue by introducing more variety into the generated text?

  • Analyzing Text Generation Outputs

  • Comparing Text Generation Strategies

  • When using a stochastic decoding method for text generation, the model is guaranteed to select the single token with the highest probability at each step.
