Gemma-7B

Gemma-7B is a large language model pre-trained on a dataset of 6 trillion tokens, drawn from webpages, mathematics content, and code.
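A minimal sketch of running inference with the pre-trained model via Hugging Face Transformers. The checkpoint id `google/gemma-7b` and the use of the Transformers library are assumptions not stated on this page; the weights are gated, so downloading them requires accepting the model license on the Hugging Face Hub and authenticating first.

```python
# Sketch: text generation with Gemma-7B via Hugging Face Transformers.
# Assumes the gated checkpoint "google/gemma-7b" is accessible after
# `huggingface-cli login`; not part of the original page.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This loads the base (non-instruction-tuned) checkpoint, which completes text rather than following chat-style instructions.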

Updated 2026-04-21

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
