Experimentation 1 - Interpreting and Debugging NLP Models (10/5/2021)
Content
- Neural NLP model debugging methods
- Probing, attribution techniques, and interpretable evaluation (minimal code sketches of attribution and probing appear below)
- Recommended Reading: Analysis in NLP Survey, to the end of Section 3 (Belinkov and Glass 2018)
- Reference: T5: Larger Models are Better (Raffel et al. 2020)
- Reference: Scaling Laws for Neural Language Models (Kaplan et al. 2020)
- Reference: Train Large, then Compress (Li et al. 2020)
- Reference: compare-mt (Neubig et al. 2019)
- Reference: ExplainaBoard (Liu et al. 2021)
- Reference: Local Perturbations (Ribeiro et al. 2016)
- Reference: Gradient-based Explanations (Ancona et al. 2018)
- Reference: Edge Probing (Tenney et al. 2019)
- Reference: Control Tasks for Probing (Hewitt and Liang 2019)
- Reference: Information-Theoretic Probing (Voita and Titov 2020)
Slides: Interpretation Slides
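The attribution references above (local perturbations and gradient-based explanations) both assign importance scores to input tokens. Below is a minimal sketch of the two ideas applied to an off-the-shelf sentiment classifier: a leave-one-word-out perturbation score (a simpler relative of the local surrogate models in Ribeiro et al. 2016) and input-times-gradient saliency (one of the methods analyzed by Ancona et al. 2018). The checkpoint name and the positive-class index are assumptions for illustration, not part of the lecture materials.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "the plot was thin but the acting was wonderful"
words = text.split()

def prob_positive(sentence: str) -> float:
    """Positive-class probability for one sentence (class index 1 assumed positive)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# (1) Leave-one-word-out perturbation: drop each word and measure how much the
# positive-class probability changes; large drops suggest important words.
base = prob_positive(text)
for i, word in enumerate(words):
    perturbed = " ".join(words[:i] + words[i + 1:])
    print(f"{word:>12s}  delta={base - prob_positive(perturbed):+.4f}")

# (2) Input-times-gradient saliency: gradient of the predicted-class logit
# with respect to the input embeddings, multiplied elementwise by them.
enc = tokenizer(text, return_tensors="pt")
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
embeds.requires_grad_(True)
logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
pred = logits.argmax(dim=-1).item()
logits[0, pred].backward()
scores = (embeds.grad * embeds).detach().sum(dim=-1).squeeze(0)
for tok, s in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()), scores.tolist()):
    print(f"{tok:>12s}  saliency={s:+.4f}")
```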
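The probing references above (edge probing, control tasks, information-theoretic probing) all train small classifiers on frozen representations to test what a model encodes. Below is a minimal sketch of that setup: a linear probe on frozen encoder representations predicting a simple surface property (a sentence-length bucket) over synthetic data, compared against a majority-class baseline. The encoder checkpoint, the synthetic corpus, and the probed property are illustrative assumptions; the control-task and information-theoretic papers above describe more careful ways to interpret probe accuracies.

```python
import random

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

random.seed(0)
model_name = "distilbert-base-uncased"  # assumed example encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
encoder.eval()

# Synthetic corpus: random word sequences; the probe predicts a length bucket
# (short vs. long) from the first-token ([CLS]-position) representation.
vocab = ["the", "a", "old", "blue", "cat", "dog", "house", "sees", "runs", "quickly"]
sentences = [" ".join(random.choices(vocab, k=random.randint(3, 12))) for _ in range(300)]
labels = [0 if len(s.split()) <= 7 else 1 for s in sentences]

with torch.no_grad():
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    features = encoder(**batch).last_hidden_state[:, 0, :].numpy()

# Train a linear probe on the frozen features and compare its held-out
# accuracy against a trivial majority-class baseline.
train, test = slice(0, 200), slice(200, 300)
probe = LogisticRegression(max_iter=1000).fit(features[train], labels[train])
majority = max(set(labels[train]), key=labels[train].count)
baseline = sum(y == majority for y in labels[test]) / len(labels[test])

print("probe accuracy:   ", round(probe.score(features[test], labels[test]), 3))
print("majority baseline:", round(baseline, 3))
```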