Lunch at 12:30pm, talk at 1pm, in 148 Fitzpatrick

Title: Explainable Decision Trees

Abstract: Decision trees offer a strong balance between speed, accuracy, and interpretability for tabular data. In practice, however, interpreting decision trees is often difficult, as understanding them requires background knowledge in machine learning. In this talk, I will discuss my work exploring how large language models can be used to assist with decision tree explainability, as well as how these explanations can be evaluated using automatically generated quiz questions.

Bio: Noah Ziems is a second-year PhD student in the Department of Computer Science and Engineering at the University of Notre Dame, and a member of Dr. Meng Jiang’s DM2 lab. His research focuses on question answering, large language models, and information retrieval.