Concept

Bengio et al. (2003) Feed-Forward Neural Language Model

Bengio et al. (2003) is a widely cited, foundational study in neural language modeling. It introduced a feed-forward neural network that estimates the probability of the next word given the n-1 preceding words, trained end-to-end. A key by-product of training was a set of distributed word representations, learned jointly with the probability function, which came to be known as word embeddings. This model was instrumental in demonstrating how neural networks could overcome the data-sparsity limitations of traditional count-based n-gram language models.
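The forward pass of such a model can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, variable names, and random initialization below are toy assumptions (the 2003 paper used much larger vocabularies and trained the weights by gradient descent), and the optional direct input-to-output connections from the paper are omitted.

```python
import numpy as np

# Toy dimensions, chosen purely for illustration.
V = 10        # vocabulary size
n_ctx = 3     # context length (the n-1 preceding words of an n-gram)
m = 4         # embedding dimension (columns of the shared table C)
h = 8         # hidden-layer size

rng = np.random.default_rng(0)
C = rng.normal(0.0, 0.1, (V, m))            # shared word-embedding table
H = rng.normal(0.0, 0.1, (h, n_ctx * m))    # input-to-hidden weights
U = rng.normal(0.0, 0.1, (V, h))            # hidden-to-output weights
b_h = np.zeros(h)                           # hidden bias
b_o = np.zeros(V)                           # output bias

def next_word_probs(context_ids):
    """P(w_t | preceding n-1 words): look up each context word's
    embedding, concatenate, apply a tanh hidden layer, then softmax."""
    x = C[context_ids].reshape(-1)          # concatenated context embeddings
    hidden = np.tanh(b_h + H @ x)
    logits = b_o + U @ hidden
    exp = np.exp(logits - logits.max())     # numerically stable softmax
    return exp / exp.sum()

p = next_word_probs([1, 4, 7])              # a context of three word ids
```

The result `p` is a proper probability distribution over all V words; during training, both the network weights and the embedding table `C` would be updated to maximize the likelihood of observed next words, which is how the embeddings emerge as a by-product.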

Updated 2026-04-18

Tags

Ch.2 Generative Models - Foundations of Large Language Models
