Modeling Long Sequences (4/27/2021)
- Extracting Features from Long Sequences
- Models of Coreference
- Reference: RNN Language Models (Mikolov et al 2010)
- Reference: Larger Context RNNLMs (Mikolov and Zweig 2012)
- Reference: Self Attention over Previous Sentence (Voita et al. 2018)
- Reference: Self Attention over Previous Vectors (Dai et al. 2019)
- Reference: Compressive Transformer (Rae et al. 2019)
- Reference: Sparse Transformers (Child et al. 2019)
- Reference: Adaptive Span Transformer (Sukhbaatar et al. 2019)
- Reference: Adaptively Sparse Transformers (Correia et al. 2019)
- Reference: Reformer (Kitaev et al. 2020)
- Reference: Linformer (Wang et al. 2020)
- Reference: Nyströmformer (Xiong et al. 2021)
- Reference: Evaluation: Sentence Scrambling (Barzilay and Lapata 2008)
- Reference: Evaluation: Final Sentence Prediction (Mostafazadeh et al. 2016)
- Reference: Evaluation: Final Word Prediction (Paperno et al. 2016)
- Reference: Long Range Arena (Tay et al. 2020)
- Reference: End-to-end Neural Coreference Resolution (Lee et al. 2017)
- Reference: Deep Reinforcement Learning for Mention-Ranking Coreference (Clark and Manning 2016)
- Reference: Entity-level Representations (Clark and Manning 2016)
- Reference: Global Features for Coreference (Wiseman et al. 2016)
- Reference: Anaphoricity and Antecedent Features (Wiseman et al. 2015)
- Reference: Coref, success and challenges (Ng 2016)
- Reference: Discourse-driven LMs (Peng and Roth 2016)
- Reference: 15 Years in Coreference (Ng 2010)
- Reference: Sentence-level LSTMs for Script Inference (Pichotta and Mooney 2016)
- Reference: Easy Victories and Uphill Battles (Durrett and Klein 2013)
- Reference: Solving Hard Coreference Problems (Peng et al. 2015)
- Reference: Entity-centric Coref (Clark and Manning 2015)
- Reference: Modular Entity-centric Model (Haghighi and Klein 2010)
- Reference: Reference-aware Language Models (Yang et al. 2017)
- Reference: Reference-aware QA Models (Dhingra et al. 2017)
- Reference: Representation Learning for Text-level Discourse Parsing (Ji and Eisenstein 2014)
- Reference: Recursive Deep Models for Discourse (Li et al. 2014)
- Reference: Attention-based Hierarchical Discourse (Li et al. 2016)
- Reference: Pay Attention to the Ending (Cai et al. 2017)
- Reference: Discourse Language Models (Chaturvedi et al. 2017)
- Reference: Adversarial Implicit Discourse Relation Classification (Qin et al. 2017)
- Reference: Discourse Structure for Text Categorization (Ji and Smith 2017)
Slides: Document-level Processing Slides
Video: Document-level Processing Video