Classes
- Fall 2024: Advanced NLP (CS11-711 @ CMU)
- Spring 2024: Advanced NLP (CS11-711 @ CMU)
- Fall 2022: Advanced NLP (CS11-711 @ CMU)
- Spring 2022: Multilingual NLP (CS11-737 @ CMU)
- Fall 2021: Advanced NLP (CS11-711 @ CMU)
- Spring 2021: Neural Networks for NLP (CS11-747 @ CMU)
- Fall 2020: Multilingual NLP (CS11-737 @ CMU)
- Spring 2020: Neural Networks for NLP (CS11-747 @ CMU)
- Fall 2019: Machine Translation and Sequence-to-sequence Models (CS11-731 @ CMU)
- Spring 2019: Neural Networks for NLP (CS11-747 @ CMU)
- Fall 2018: Machine Translation and Sequence-to-sequence Models (CS11-731 @ CMU)
- Spring 2018: Neural Networks for NLP (CS11-747 @ CMU)
- Fall 2017: Neural Networks for NLP (CS11-747 @ CMU)
- Spring 2017: Machine Translation and Sequence-to-sequence Models (CS11-731 @ CMU)
- 2015: Sequential Data Modeling (@NAIST; 2 sessions)
- 2014: Sequential Data Modeling (@NAIST; 2 sessions)
- 2014: Speech Information Processing (@Kansai University; 2 sessions)
- 2013: Advanced Research Seminar: Machine Translation
- 2013: Sequential Data Modeling (@NAIST; Structured Perceptron, Conditional Random Fields)
- 2013: Speech Information Processing (@Kansai University; 2 sessions)
- 2012: Intelligent System Design (@NAIST; Machine Translation)
Textbooks
I've (co-)written a number of textbooks on technical topics (mostly in Japanese at the moment).
- Machine Translation (Corona-sha, 2014)
- Natural Language Processing: Basics and Technology (Shoeisha, 2016)
- Iwanami Data Science, Volume 2: Natural Language Processing (Iwanami, 2016)
Older Materials
Practical Neural Networks for NLP: From Theory to Code
A tutorial given by Chris Dyer, Yoav Goldberg, and me at EMNLP 2016. It covers how to turn your NLP ideas into code for neural network models, with a focus on our toolkit DyNet; a small code sketch in that spirit follows the part list below.
- Part 1: Neural Nets, Recurrent Nets, and More
- Part 2: More Complicated RNNs, Tree Structured Networks, Structured Prediction
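To give a flavor of the kind of code the tutorial walks through, here is a minimal sketch (mine, not taken from the tutorial slides) of an LSTM sentence classifier in DyNet; the dimensions and the format of the training data are placeholder assumptions.

```python
# Minimal DyNet sketch: an LSTM sentence classifier trained one example
# at a time.  The dimensions and the `data` format are placeholder
# assumptions, not values from the tutorial.
import dynet as dy

VOCAB_SIZE, EMB_DIM, HID_DIM, NUM_LABELS = 10000, 64, 128, 2

pc = dy.ParameterCollection()
trainer = dy.SimpleSGDTrainer(pc)
E = pc.add_lookup_parameters((VOCAB_SIZE, EMB_DIM))   # word embeddings
W = pc.add_parameters((NUM_LABELS, HID_DIM))          # output layer weights
b = pc.add_parameters((NUM_LABELS,))                  # output layer bias
lstm = dy.LSTMBuilder(1, EMB_DIM, HID_DIM, pc)

def train_one(word_ids, label):
    dy.renew_cg()                                  # fresh graph per example
    state = lstm.initial_state()
    for w in word_ids:                             # run the LSTM over the words
        state = state.add_input(dy.lookup(E, w))
    scores = dy.parameter(W) * state.output() + dy.parameter(b)
    loss = dy.pickneglogsoftmax(scores, label)     # negative log-likelihood
    loss_value = loss.value()                      # forward pass
    loss.backward()                                # backward pass
    trainer.update()
    return loss_value

# data = [([4, 8, 15], 1), ([16, 23], 0), ...]    # hypothetical (word ids, label) pairs
# for word_ids, label in data:
#     train_one(word_ids, label)
```

Building a fresh computation graph for every example (dy.renew_cg()) is the define-by-run style of programming that the tutorial emphasizes.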
NLP Programming Tutorial
This is a tutorial I used to do at NAIST for people starting to learn how to program basic algorithms for natural language processing. You need very little programming experience to start out, but each tutorial builds on the previous ones, so it is highly recommended that you do them in order. You can download the slides and data for the practice exercises from GitHub; a small code sketch in the spirit of the first tutorial appears after the list below.
- Tutorial 0: Programming Basics
- Tutorial 1: Unigram Language Models
- Tutorial 2: Bigram Language Models
- Tutorial 3: Word Segmentation
- Tutorial 4: Part-of-Speech Tagging with Hidden Markov Models
- Tutorial 5: The Perceptron Algorithm
- Tutorial 6: Advanced Discriminative Training
- Tutorial 7: Neural Networks
- Tutorial 8: Recurrent Neural Networks
- Tutorial 9: Topic Models
- Tutorial 10: Phrase Structure Parsing
- Tutorial 11: Dependency Parsing
- Tutorial 12: Structured Perceptron
- Tutorial 13: Search Algorithms
- Bonus 1: Kana-Kanji Conversion for Japanese Input
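To give a sense of where the series starts, here is a minimal sketch in the spirit of Tutorial 1 (not the reference solution from the repository): a maximum-likelihood unigram model interpolated with a uniform unknown-word distribution, evaluated by per-word entropy. The interpolation weight and assumed vocabulary size are placeholder values.

```python
# Sketch in the spirit of Tutorial 1: a unigram language model whose
# maximum-likelihood estimates are interpolated with a uniform
# unknown-word distribution.  LAMBDA_1 and VOCAB_SIZE are placeholder
# values, not necessarily the ones used in the slides.
import math
from collections import Counter

LAMBDA_1 = 0.95         # weight on the maximum-likelihood unigram model
VOCAB_SIZE = 1_000_000  # assumed vocabulary size for the unknown-word model

def train_unigram(sentences):
    counts = Counter(w for sent in sentences for w in sent + ["</s>"])
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def log2_prob(model, sentence):
    unk = (1 - LAMBDA_1) / VOCAB_SIZE          # uniform unknown-word probability
    return sum(math.log2(LAMBDA_1 * model.get(w, 0.0) + unk)
               for w in sentence + ["</s>"])

train_data = [["a", "b", "c"], ["a", "b", "d"]]  # toy placeholder corpus
test_data = [["a", "c", "e"]]
model = train_unigram(train_data)
num_words = sum(len(s) + 1 for s in test_data)   # +1 for the </s> symbol
entropy = -sum(log2_prob(model, s) for s in test_data) / num_words
print(f"per-word entropy: {entropy:.2f} bits")
```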
Machine Translation
Note that the following materials are now (as of August 2022) very out of date; I'm keeping them here for historical purposes.
Tips on Building Neural Translation Systems
I wrote a fairly extensive tutorial (circa July 2016) on the things you need to do to build a good neural machine translation system.
Building a Phrase-Based Machine Translation System
This tutorial (by me and Kevin) covers the many steps involved in building a phrase-based machine translation system, specifically what goes on when training a system using Moses. For each step, it describes the processing that occurs, open-source tools that can be used to do this processing, and some of the existing research problems.
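As an illustration of one such step (my own simplified sketch, not code from the tutorial), the snippet below extracts phrase pairs consistent with a word alignment, roughly following the standard extraction algorithm; it omits the extension over unaligned boundary words that full systems such as Moses perform, and the alignment links in the example are hypothetical.

```python
# Simplified consistent phrase-pair extraction from a word alignment,
# roughly following the standard algorithm; the extension over unaligned
# boundary words performed by full systems such as Moses is omitted.
def extract_phrases(src, trg, alignment, max_len=7):
    phrases = set()
    for s_start in range(len(src)):
        for s_end in range(s_start, min(s_start + max_len, len(src))):
            # Target positions aligned to anything in the source span.
            t_positions = [t for (s, t) in alignment if s_start <= s <= s_end]
            if not t_positions:
                continue
            t_start, t_end = min(t_positions), max(t_positions)
            if t_end - t_start + 1 > max_len:
                continue
            # Consistency check: no word inside the target span may be
            # aligned to a source word outside the source span.
            if any(t_start <= t <= t_end and not (s_start <= s <= s_end)
                   for (s, t) in alignment):
                continue
            phrases.add((" ".join(src[s_start:s_end + 1]),
                         " ".join(trg[t_start:t_end + 1])))
    return phrases

# Toy example with hypothetical alignment links (source index, target index).
src = "watashi wa gakusei desu".split()
trg = "i am a student".split()
alignment = {(0, 0), (2, 3), (3, 1)}
for f, e in sorted(extract_phrases(src, trg, alignment)):
    print(f, "|||", e)
```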
Bayesian Non-Parametrics Tutorial
This is a simple tutorial about non-parametric Bayesian techniques, consisting of several parts. The first part discusses the basic motivations and theory behind the use of Bayesian non-parametrics. The second part demonstrates how to implement unsupervised part-of-speech induction using Gibbs sampling and the Bayesian HMM, followed by an explanation of how to go from the finite HMM to the infinite HMM. Finally, the tutorial covers some more advanced topics and applications proposed in the recent literature.
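To make the second part a little more concrete, here is a heavily simplified sketch (not from the tutorial) of collapsed Gibbs sampling for a Bayesian HMM tagger: each tag is resampled in turn from counts smoothed with symmetric Dirichlet priors. The number of tags, hyperparameters, and toy corpus are placeholders, and the small correction terms the exact collapsed sampler needs when neighboring positions share a tag are ignored.

```python
# Heavily simplified sketch of collapsed Gibbs sampling for Bayesian HMM
# part-of-speech induction.  K, ALPHA, BETA, and the corpus are placeholders,
# and the correction terms needed when neighboring positions share a tag
# are ignored for brevity.
import random
from collections import defaultdict

K, ALPHA, BETA = 10, 0.1, 0.1        # number of tags, Dirichlet hyperparameters
BOS = -1                             # boundary marker for sentence edges

corpus = [["the", "dog", "runs"], ["a", "cat", "sleeps"]]   # toy placeholder data
V = len({w for sent in corpus for w in sent})

tags = [[random.randrange(K) for _ in sent] for sent in corpus]
emit, emit_tot = defaultdict(int), defaultdict(int)     # (tag, word) / tag counts
trans, trans_tot = defaultdict(int), defaultdict(int)   # (prev, tag) / prev counts

def context(s, i):
    prev = tags[s][i - 1] if i > 0 else BOS
    nxt = tags[s][i + 1] if i + 1 < len(tags[s]) else BOS
    return prev, nxt

def add_counts(s, i, delta):
    w, t = corpus[s][i], tags[s][i]
    prev, nxt = context(s, i)
    emit[t, w] += delta; emit_tot[t] += delta
    trans[prev, t] += delta; trans_tot[prev] += delta
    trans[t, nxt] += delta; trans_tot[t] += delta

for s in range(len(corpus)):
    for i in range(len(corpus[s])):
        add_counts(s, i, +1)

def resample(s, i):
    add_counts(s, i, -1)                      # remove the current assignment
    w = corpus[s][i]
    prev, nxt = context(s, i)
    weights = []
    for k in range(K):
        p_in = (trans[prev, k] + ALPHA) / (trans_tot[prev] + K * ALPHA)
        p_out = (trans[k, nxt] + ALPHA) / (trans_tot[k] + K * ALPHA)
        p_emit = (emit[k, w] + BETA) / (emit_tot[k] + V * BETA)
        weights.append(p_in * p_out * p_emit)
    tags[s][i] = random.choices(range(K), weights=weights)[0]
    add_counts(s, i, +1)                      # add the new assignment back

for _ in range(100):                          # Gibbs sweeps over the corpus
    for s in range(len(corpus)):
        for i in range(len(corpus[s])):
            resample(s, i)
print(tags)
```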
Other Presentations and Useful Information
- Academic Paper Style Guide: Tips on writing style for those just starting to write academic papers.
- Lattice and Hypergraph MERT: An introduction to two papers on minimum error rate training over lattices and hypergraphs.