Lunch at 12:30pm, (virtual) talk at 1pm, in 148 Fitzpatrick
Title: Towards Useful AI Interpretability via Interactive AI Explanations
Abstract: Although the plethora of eXplainable AI (XAI) approaches has been validated to faithfully reflect model behavior, how humans understand and use AI explanations remains underexplored. We carried out human studies to investigate how useful AI explanations are for human understanding of AI models. Specifically, we designed a self-explaining model, LimitedInk, that allows users to extract important words (i.e., “rationales”) at any target length in text classification tasks. We then asked human judges to predict the sentiment label based solely on the rationales, and found that explanations are not always helpful for humans in simulating model predictions. To gain insight into possible reasons for these findings, we examined the gaps between the status quo of XAI techniques and real-world user demands. We surveyed over 200 NLP XAI papers and compared them with the XAI Question Bank. We found that users need diverse XAI types to gain a comprehensive view of how an AI system works, yet there is no one-size-fits-all XAI technique that caters to these diverse and dynamic human needs. In response, we present a conversational AI explanation prototype, ConvXAI, to mitigate these gaps and facilitate more useful AI explanations. We identify four design principles for conversational XAI systems and evaluate the prototype with user studies in scientific writing tasks.
Bio: Hua Shen is a fourth-year Ph.D. student at Penn State University. She works on human-centered Explainable AI at the intersection of NLP and HCI, advised by Dr. Ting-Hao “Kenneth” Huang at Penn State, and collaborates closely with Dr. Sherry Tongshuang Wu from CMU. She has also conducted speech processing research during internships at Amazon Alexa AI and Google AI. Her broad research interests lie in improving human-centered AI interpretability and fairness in speech and natural language processing. See more information at: https://hua-shen.org/.