Natural language processing (NLP) aims to enable computers to use human languages, so that people can, for example, interact with computers naturally, communicate with people who don't speak a common language, or access speech or text data at scales not otherwise possible. The NLP group at Notre Dame is interested in all aspects of NLP, with a focus on machine translation and connections with formal language theory.

The NLP group co-sponsors NL+, the Natural Language Processing Lunch Seminar.

Current Members

  • Associate professor
    translation, syntax, formal language theory, programming languages
  • PhD student
    generation from language models, probabilistic models
  • PhD student
    NLP for classical languages, computational historical linguistics
  • PhD student
    retrieval-augmented translation
  • PhD student
    dialects, noisy text, zero-shot transfer
  • PhD student
    computational linguistics, language documentation, syntax
  • Andy Yang
    PhD student
    neural network expressivity, formal logic

Former Members


Projects

Expressivity of neural sequence models
Relating neural sequence models to automata, grammars, circuits, and logics. Collaboration with Peter Cholak and Anand Pillay.

Retrieval-augmented neural machine translation
Augmenting neural machine translation systems by retrieving and using data beyond parallel text. Collaboration with Meng Jiang. Sponsored by NSF.

Natural language (variety) processing
Collaboration with Antonis Anastasopoulos (GMU) and Yulia Tsvetkov (UW). Sponsored by NSF.

Language documentation with an AI helper
Collaboration with Antonis Anastasopoulos and Geraldine Walther (GMU). Sponsored by NSF.

Differentiable, probabilistic programming with recursive structured models
Collaboration with Chung-chieh Shan (IU). Sponsored by NSF.

NLP on medieval texts
Analysis of Latin texts and language modeling for OCR of Latin manuscripts. Collaborations with Walter Scheirer and Hildegund Müller. Sponsored by Notre Dame FRSP.

Recent Publications

Andy Yang and David Chiang. Counting like transformers: compiling temporal counting logic into softmax transformers. arXiv:2404.04393. PDF BibTeX
Lena Strobl, Dana Angluin, David Chiang, Jonathan Rawski, and Ashish Sabharwal. Transformers as transducers. arXiv:2404.02040. PDF BibTeX
Fahim Faisal, Orevaoghene Ahia, Aarohi Srivastava, Kabir Ahuja, David Chiang, Yulia Tsvetkov, and Antonios Anastasopoulos. DIALECTBENCH: an NLP benchmark for dialects, varieties, and closely-related languages. 2024. arXiv:2403.11009. PDF BibTeX
Stephen Bothwell, Brian DuSell, David Chiang, and Brian Krostenko. PILA: a historical-linguistic dataset of Proto-Italic and Latin. In Proc. LREC-COLING. 2024. To appear. BibTeX
Chihiro Taguchi, Jefferson Saransig, Dayana Velásquez, and David Chiang. KILLKAN: the automatic speech recognition dataset for Kichwa with morphosyntactic information. In Proc. LREC-COLING. 2024. To appear. BibTeX
Dana Angluin, David Chiang, and Andy Yang. Masked hard-attention transformers and Boolean RASP recognize exactly the star-free languages. 2023. arXiv:2310.13897. PDF BibTeX
Lena Strobl, William Merrill, Gail Weiss, David Chiang, and Dana Angluin. What formal languages can transformers express? A survey. Transactions of the Association for Computational Linguistics, 2024. To appear. PDF BibTeX
Brian DuSell and David Chiang. Stack attention: improving the ability of transformers to model hierarchical patterns. In Proc. ICLR. 2024. PDF BibTeX
Stephen Bothwell, Justin DeBenedetto, Theresa Crnkovich, Hildegund Müller, and David Chiang. Introducing rhetorical parallelism detection: a new task with datasets, metrics, and baselines. In Proc. EMNLP, 5007–5039. 2023. doi:10.18653/v1/2023.emnlp-main.305. PDF BibTeX
Alexandra Butoi, Tim Vieira, Ryan Cotterell, and David Chiang. Efficient algorithms for recognizing weighted tree-adjoining languages. In Proc. EMNLP. 2023. PDF BibTeX
Aarohi Srivastava and David Chiang. BERTwich: extending BERT's capabilities to model dialectal and noisy text. In Findings of ACL: EMNLP. 2023. PDF BibTeX
Chihiro Taguchi, Yusuke Sakai, Parisa Haghani, and David Chiang. Universal automatic phonetic transcription into the International Phonetic Alphabet. In Proc. INTERSPEECH. 2023. doi:10.21437/Interspeech.2023-2584. PDF BibTeX
Alexandra Butoi, Ryan Cotterell, and David Chiang. Convergence and diversity in the control hierarchy. In Proc. ACL. 2023. PDF BibTeX
David Chiang, Peter Cholak, and Anand Pillay. Tighter bounds on the expressivity of transformer encoders. In Proc. ICML, 5544–5562. 2023. PDF BibTeX
Aarohi Srivastava and David Chiang. Fine-tuning BERT with character-level noise for zero-shot transfer to dialects and closely-related languages. In Proc. Workshop on NLP for Similar Languages, Varieties and Dialects. 2023. PDF BibTeX
Patrick Soga and David Chiang. Bridging graph position encodings for transformers with weighted graph-walking automata. Transactions on Machine Learning Research, 2023. PDF BibTeX
Brian DuSell and David Chiang. The surprising computational power of nondeterministic stack RNNs. In Proc. ICLR. 2023. PDF BibTeX
David Chiang, Colin McDonald, and Chung-chieh Shan. Exact recursive probabilistic programming. PACMPL, 2023. doi:10.1145/3586050. PDF BibTeX
Chihiro Taguchi and David Chiang. Introducing morphology in Universal Dependencies Japanese. In Proc. Workshop on Universal Dependencies, 65–72. 2023. PDF BibTeX
David Chiang, Alexander M. Rush, and Boaz Barak. Named tensor notation. Transactions on Machine Learning Research, 2023. PDF BibTeX
Darcey Riley and David Chiang. A continuum of generation tasks for investigating length bias and degenerate repetition. In Proc. BlackboxNLP. 2022. PDF BibTeX
Alexandra Butoi, Brian DuSell, Tim Vieira, Ryan Cotterell, and David Chiang. Algorithms for weighted pushdown automata. In Proc. EMNLP. 2022. PDF BibTeX
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, and others. Beyond the Imitation Game: quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. PDF BibTeX
David Chiang and Peter Cholak. Overcoming a theoretical limitation of self-attention. In Proc. ACL. 2022. PDF BibTeX
Brian DuSell and David Chiang. Learning hierarchical structures with differentiable nondeterministic stacks. In Proc. ICLR. 2022. PDF BibTeX

All papers

Language and Computation at Notre Dame