Applications 2 - Bias and Fairness (10/25/2022)
- Computational Social Science
- Types of Bias in NLP Models
- How to Prevent Bias in NLP
References
- Highly Recommended Reading: Language (Technology) is Power: A Critical Survey of “Bias” in NLP (Blodgett et al. 2020)
- Reference: Survey of Race, Racism, and Anti-Racism in NLP (Field et al. 2021)
- Reference: Ethics of Persuasive Technology (Berdichevsky and Neuenschwander 1999)
- Reference: Measuring and Mitigating Unintended Bias in Text Classification (Dixon et al. 2017)
- Reference: Counterfactual Thought (Byrne 2016)
- Reference: Norms in Counterfactual Selection (Fazelpour 2020)
- Reference: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings (Bolukbasi et al. 2016)
- Reference: Semantics derived automatically from language corpora contain human-like biases (Caliskan et al. 2017)
- Reference: On Measuring Social Biases in Sentence Encoders (May et al. 2019)
- Reference: Racial disparities in automated speech recognition (Koenecke et al. 2020)
- Reference: State and Fate of Linguistic Diversity (Joshi et al. 2020)
- Reference: Systematic Inequalities in Language Technology Performance (Blasi et al. 2021)
- Reference: Gender Bias in Coreference Resolution (Rudinger et al. 2018)
- Reference: Learning Fair Representations (Zemel et al. 2013)
- Reference: Unsupervised Domain Adaptation by Backpropagation (Ganin and Lempitsky 2015)
- Reference: Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them (Gonen and Goldberg 2019)
- Reference: Learning The Difference That Makes A Difference With Counterfactually-Augmented Data (Kaushik et al. 2020)
- Reference: Explaining the Efficacy of Counterfactually Augmented Data (Kaushik et al. 2021)
- Reference: Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem (Saunders and Byrne 2020)
- Reference: Towards Controllable Biases in Language Generation (Sheng et al. 2020)
- Reference: Gender as a Variable in Natural-Language Processing: Ethical Considerations (Larson 2017)
- Reference: Do Artifacts Have Politics? (Winner 1980)
- Reference: The Trouble With Bias (Crawford 2017)
- Reference: Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview (Shah et al. 2020)
- Reference: Moving beyond “algorithmic bias is a data problem” (Hooker 2021)
- Reference: Fairness and Machine Learning (Barocas et al. 2019)
- Reference: Computational Social Science ≠ Computer Science + Social Data (Wallach 2018)
- Reference: Manipulative Tactics in Political Emails from the 2020 U.S. Election (Mathur et al. 2020)
- Reference: Text as Data (Grimmer and Stewart 2012)
- Reference: Agenda Setting in Russian News (Field et al. 2018)
- Reference: Diachronic Word Embeddings (Hamilton et al. 2016)
- Reference: Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes (Garg et al. 2017)
Slides: Bias/Fairness Slides