Learn Before
  • Examples of Pre-trained Transformers by Architecture

Reference

BART

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 7871–7880. https://doi.org/10.18653/v1/2020.acl-main.703
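The paper pre-trains BART by corrupting input text (most effectively with "text infilling": spans whose lengths are drawn from a Poisson distribution with lambda = 3 are each replaced by a single mask token, masking about 30% of tokens) and training the sequence-to-sequence model to reconstruct the original. Below is a toy sketch of that corruption step, assuming whitespace tokenization and NumPy; the paper's actual implementation operates on subword tokens inside its training pipeline.

# Toy sketch of BART-style text infilling (not the authors' code):
# replace random spans with a single <mask> token until roughly
# `target_ratio` of the original tokens have been masked.
import numpy as np

rng = np.random.default_rng(0)

def text_infilling(tokens, mask_token="<mask>", lam=3.0, target_ratio=0.3):
    tokens = list(tokens)
    budget = int(target_ratio * len(tokens))
    masked = 0
    while masked < budget:
        span = int(rng.poisson(lam))                          # span length ~ Poisson(3)
        start = int(rng.integers(0, max(1, len(tokens) - span + 1)))
        # Each span (possibly length 0, i.e. a pure insertion) becomes one mask.
        tokens[start:start + span] = [mask_token]
        masked += max(span, 1)                                # count >= 1 so the loop terminates
    return tokens

original = "the quick brown fox jumps over the lazy dog".split()
corrupted = text_infilling(original)
# Training pair: the encoder sees `corrupted`; the decoder learns to
# reconstruct `original` autoregressively.
print(corrupted)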


Updated 2022-05-26

Contributors: Adam Nik (Carleton College)

Tags

Data Science

Related
  • BERT (Bidirectional Encoder Representations from Transformers)

  • T5

  • RoBERTa

  • GPT Series

  • LLaMA2

  • DeepSeek-V3

  • Falcon

  • Mistral

  • PaLM-540B

  • Gemma-7B

  • Gemma2

  • A software development team is tasked with building a feature that automatically generates a concise, one-paragraph summary of a long news article. The system must first comprehend the full context of the source article and then generate a new, coherent summary. Given the typical strengths of different foundational model architectures, which of these models would be the most suitable choice for this task? (See the summarization sketch after this list.)

  • Match each pre-trained model with the description that best fits its architectural design and primary use case.

  • Evaluating Model Architecture Selection for a Classification Task
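
For the summarization question above, BART's encoder-decoder design is the natural fit: the bidirectional encoder reads the entire article, and the autoregressive decoder generates a new summary. The sketch below is a minimal illustration, assuming the Hugging Face transformers library and the public facebook/bart-large-cnn checkpoint (BART fine-tuned on CNN/DailyMail news summaries); the article text and generation lengths are placeholder assumptions, not values from the paper.

# Minimal abstractive-summarization sketch with a fine-tuned BART model.
# Assumes: pip install transformers torch
from transformers import pipeline

# The pipeline wraps BART's encoder-decoder: the encoder ingests the
# full article bidirectionally; the decoder generates the summary
# token by token.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Placeholder news article text. In practice this would be the "
    "full source article the team wants to condense into one paragraph."
)

# max_length / min_length are token counts; these values are
# illustrative defaults, not prescribed by BART itself.
result = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(result[0]["summary_text"])

Encoder-only models such as BERT can classify or extract from the article but have no decoder for free-form generation, which is why an encoder-decoder model like BART (or T5) is the textbook choice for abstractive summarization.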
