Modeling 2 - Conditioned Generation (9/13/2022)
Content:
- Encoder-Decoder Models (see the code sketch after this list)
- Conditioned Generation and Search
- Ensembling (see the averaging sketch after the reading list)
- Evaluation (see the BLEU example at the end of this section)
- Types of Data to Condition On
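The first two topics above lend themselves to a small worked example. Below is a minimal sketch of an encoder-decoder with beam search in PyTorch; every module name, size, and token ID is an illustrative assumption, not the course's reference code, and the model is untrained, so it is only meant to show the shape of the computation.

```python
# Minimal encoder-decoder with beam search (illustrative sketch only;
# all names, sizes, and token IDs below are assumptions, not course code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def encode(self, src):                        # src: (1, src_len) token IDs
        _, h = self.encoder(self.embed(src))      # keep only the final state
        return h                                  # (1, 1, hidden)

    def step(self, tok, h):                       # one decoder step
        o, h = self.decoder(self.embed(tok), h)   # tok: (1, 1)
        return F.log_softmax(self.out(o[:, -1]), dim=-1), h

@torch.no_grad()
def beam_search(model, src, bos, eos, beam=4, max_len=20):
    """Keep the `beam` best-scoring partial hypotheses at every step."""
    h = model.encode(src)                         # condition on the source
    beams = [([bos], 0.0, h)]                     # (tokens, log-prob, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for toks, score, state in beams:
            logp, state2 = model.step(torch.tensor([[toks[-1]]]), state)
            topv, topi = logp[0].topk(beam)
            for v, i in zip(topv.tolist(), topi.tolist()):
                candidates.append((toks + [i], score + v, state2))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for cand in candidates[:beam]:
            (finished if cand[0][-1] == eos else beams).append(cand)
        if not beams:                             # every hypothesis hit EOS
            break
    finished += beams                             # fall back to unfinished ones
    return max(finished, key=lambda c: c[1])[0]

model = Seq2Seq(vocab_size=100)                   # untrained toy model
src = torch.tensor([[5, 6, 7]])                   # made-up "source sentence"
print(beam_search(model, src, bos=1, eos=2))      # random tokens until trained
```

Greedy decoding is the beam=1 special case. Note that, as Stahlberg and Byrne (2019) discuss, widening the beam can surface pathologies of the model's own scores (such as overly short hypotheses), so search errors and model errors need to be diagnosed separately.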
Reading Material
- Recommended Reading: Neural Machine Translation and Sequence-to-Sequence Models, Chapter 7
- Reference: Recurrent Continuous Translation Models (Kalchbrenner and Blunsom 2013)
- Reference: LSTM Encoder-Decoders (Sutskever et al. 2014)
- Reference: On NMT Search Errors and Model Errors: Cat Got Your Tongue? (Stahlberg and Byrne 2019)
- Reference: The Curious Case of Neural Text Degeneration (Holtzman et al. 2019)
- Reference: Quality-Aware Decoding (Fernandes et al. 2022)
- Tool: Quality-Aware Decoding
- Reference: Parameter Averaging (Bahar et al. 2017)
- Reference: Knowledge Distillation (Kim et al. 2016)
- Link: WMT Shared Tasks
- Reference: Meena Chatbot (Adiwardana et al. 2020)
- Reference: Generation from Images (Karpathy and Li 2015)
- Reference: Generation from Structured Data (Wen et al. 2015)
- Reference: Challenges in Data-to-Document Generation (Wiseman et al. 2017)
- Reference: Generation from Input+Tags (Sennrich et al. 2016)
- Reference: Generation from TED Talk Metadata (Hoang et al. 2016)
- Reference: WMT Translation Tasks
- Reference: GENIE Leaderboard
- Reference: BLEU (Papineni et al. 2002)
- Reference: BERTScore (Zhang et al. 2020)
- Reference: BLEURT (Sellam et al. 2020)
- Reference: COMET (Rei et al. 2020)
- Reference: PRISM (Thompson and Post 2020)
- Reference: BARTScore (Yuan et al. 2021)
- Reference: WMT Metrics Shared Task (Mathur et al. 2020)
- Reference: Re-evaluating Evaluation in Text Summarization (Bhandari et al. 2020)
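The Ensembling topic above and the parameter-averaging reference (Bahar et al. 2017) are both forms of averaging, so one sketch can cover the two. This is a hedged illustration: the models are assumed to expose the `step()` interface from the Seq2Seq sketch earlier on this page, and the checkpoint filenames are hypothetical.

```python
# Two averaging flavors, sketched under assumptions: models reuse the
# illustrative Seq2Seq.step() interface above; all filenames are made up.
import torch

def ensemble_step(models, tok, states):
    """Decode-time ensembling: average the next-token distributions of
    several models, then return to log space for scoring."""
    probs, new_states = [], []
    for m, h in zip(models, states):
        logp, h2 = m.step(tok, h)                    # per-model log-probs
        probs.append(logp.exp())
        new_states.append(h2)
    return torch.stack(probs).mean(dim=0).log(), new_states

def average_checkpoints(paths):
    """Parameter averaging (cf. Bahar et al. 2017): merge checkpoints from
    one training run into a single state dict."""
    avg = None
    for p in paths:
        sd = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in sd.items()}
        else:
            for k in avg:
                avg[k] += sd[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# Usage sketch, with hypothetical checkpoint paths:
# model.load_state_dict(average_checkpoints(
#     ["epoch8.pt", "epoch9.pt", "epoch10.pt"]))
```

The practical trade-off: decode-time ensembling works across independently trained models but multiplies inference cost by the number of ensemble members, while parameter averaging adds no inference cost but is generally only safe for checkpoints from a single training run, whose weights lie close together.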
Slides: Conditioned LM Slides
Sample Code: Conditioned LM Code Examples
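For the evaluation references above, sacrebleu gives a quick way to compute the classic n-gram overlap metric (assuming `pip install sacrebleu`; the example strings are made up). The learned metrics referenced above (BERTScore, BLEURT, COMET) ship as separate packages with their own corpus-level APIs.

```python
# Corpus-level BLEU with sacrebleu (assumes `pip install sacrebleu`;
# the hypothesis and reference strings are toy examples).
import sacrebleu

hyps = ["the cat sat on the mat", "it is raining heavily"]  # system outputs
refs = [["the cat sat on the mat", "it rains hard"]]        # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
print(bleu.score)   # corpus BLEU on a 0-100 scale
print(bleu)         # full signature with per-n-gram precisions
```

BLEU is a corpus-level string-overlap metric; as the WMT metrics shared task results (Mathur et al. 2020) show, the learned metrics above tend to correlate better with human judgments.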