
Ensembling Small Models in LLMs

In the context of Large Language Models (LLMs), ensemble learning combines multiple weaker models into a stronger overall model. Concretely, the probability distributions predicted by several small models or specialized submodels are aggregated to derive a final prediction. Common techniques for this aggregation step include majority voting, weighted averaging, and stacking.
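A minimal sketch of two of the aggregation techniques mentioned above, applied to next-token probability distributions. The function names, the toy three-token vocabulary, and the equal default weights are illustrative assumptions, not part of any specific library.

```python
import numpy as np

def weighted_average_ensemble(prob_dists, weights=None):
    """Combine per-model next-token distributions by weighted averaging.

    prob_dists: array of shape (n_models, vocab_size), each row a
    probability distribution over the vocabulary.
    weights: optional per-model weights; defaults to a uniform average.
    """
    probs = np.asarray(prob_dists, dtype=float)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights
    combined = weights @ probs                 # (vocab_size,)
    return combined / combined.sum()           # renormalize to a distribution

def majority_vote_ensemble(prob_dists):
    """Each model votes for its argmax token; the most-voted token wins."""
    votes = np.argmax(np.asarray(prob_dists, dtype=float), axis=1)
    return int(np.argmax(np.bincount(votes)))

# Toy example: three models, three-token vocabulary.
probs = [[0.7, 0.2, 0.1],
         [0.6, 0.3, 0.1],
         [0.1, 0.8, 0.1]]
print(weighted_average_ensemble(probs))  # averaged distribution
print(majority_vote_ensemble(probs))     # token index chosen by vote
```

Stacking would replace the fixed weights with a small trained model that learns how to combine the submodels' outputs, typically fitted on held-out predictions.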


Updated 2026-05-01


Tags

Foundations of Large Language Models

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences