Natural language processing (NLP) aims to enable computers to use human languages – so that people can, for example, interact with computers naturally; or communicate with people who don't speak a common language; or access speech or text data at scales not otherwise possible. The NLP group at Notre Dame is interested in all aspects of NLP, with a focus on machine translation.
Unsupervised multilingual language learning: Models and algorithms for translation, word alignment, and bilingual lexicon induction from parallel and non-parallel texts. Sponsored by DARPA LORELEI and a Google Faculty Research Award.
Neural networks for machine translation: Models and algorithms for translation and language modeling using neural networks. Sponsored by an Amazon Academic Research Award and a Google Faculty Research Award.
Documenting endangered languages: Technologies for large-scale data collection and for automatic transcription and word alignment in endangered and unwritten languages. Sponsored by the National Science Foundation.
Improving lexical choice in neural machine translation. Toan Q. Nguyen and David Chiang, 2017. [PDF]
Transfer learning across low-resource, related languages for neural machine translation. Toan Q. Nguyen and David Chiang. IJCNLP 2017. [PDF]
Top-rank enhanced listwise optimization for statistical machine translation. Huadong Chen, Shujian Huang, David Chiang, Xin-Yu Dai, and Jiajun Chen. CoNLL 2017. [PDF]
Improved neural machine translation with a syntax-aware encoder and decoder. Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. ACL 2017. [PDF]
Spoken term discovery for language documentation using translations. Antonios Anastasopoulos, Sameer Bansal, David Chiang, Sharon Goldwater, and Adam Lopez. Workshop on Speech-Centric Natural Language Processing, 2017.
Decoding with finite-state transducers on GPUs. Arturo Argueta and David Chiang. EACL 2017. [PDF]
DyNet: The Dynamic Neural Network Toolkit. Graham Neubig et al. 2017. [PDF]
A case study on using speech-to-translation alignments for language documentation. Antonios Anastasopoulos and David Chiang. Second Workshop on Computational Methods for Endangered Languages, 2017. [PDF]
Probabilistic neural programs. Kenton W. Murray and Jayant Krishnamurthy. NIPS Workshop on Neural Abstract Machines and Program Induction, 2016. [PDF]
An unsupervised probability model for speech-to-translation alignment of low-resource languages. Antonios Anastasopoulos, Long Duong, and David Chiang. EMNLP 2016. [PDF]
An attentional model for speech translation without transcription. Long Duong, Antonios Anastasopoulos, Trevor Cohn, Steven Bird, and David Chiang. NAACL HLT 2016. [PDF]
Auto-sizing neural networks: with applications to n-gram language models. Kenton Murray and David Chiang. EMNLP 2015. [PDF]
Supervised phrase table triangulation with neural word embeddings for low-resource languages. Tomer Levinboim and David Chiang. EMNLP 2015. [PDF]
Multi-task word alignment triangulation for low-resource languages. Tomer Levinboim and David Chiang. NAACL HLT 2015. [PDF]
Model invertibility regularization: sequence alignment with or without parallel data. Tomer Levinboim, Ashish Vaswani, and David Chiang. NAACL HLT 2015. [PDF] [code]
Talk: Karl Stratos (Toyota Technological Institute at Chicago)
Talk: Kyunghyun Cho (New York University)
Talk: Taylor Berg-Kirkpatrick (Carnegie Mellon University)