Lunch at 12:30pm, (virtual) talk at 1pm, in 148 Fitzpatrick

Title: Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors

Abstract: Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting. However, even advanced instruction-tuned LLMs still fail to outperform BERT-sized models on relation extraction (RE), a fundamental information extraction task. We hypothesize that instruction tuning has been unable to elicit strong RE capabilities in LLMs due to RE's low incidence in instruction-tuning datasets, where it makes up less than 1% of all tasks (Wang et al., 2022). To address this limitation, we propose QA4RE, a framework that aligns RE with question answering (QA), a predominant task in instruction-tuning datasets. Comprehensive experiments on zero-shot RE over four datasets and three strong LLMs demonstrate that our QA4RE framework consistently improves LLM performance, empowering LLMs to outperform BERT-sized language models by a large margin for the first time. Additionally, we provide thorough experiments and discussions disentangling which aspects of QA4RE are responsible for its improved performance and which LLMs benefit most from such alignment. This work illustrates a promising way of adapting LLMs to challenging and underrepresented tasks by aligning these tasks with more common instruction-tuning tasks like QA.
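To make the alignment idea concrete, here is a minimal sketch of recasting an RE instance as a multiple-choice QA prompt, in the spirit of QA4RE. The function name, prompt wording, and relation templates below are illustrative assumptions, not the talk's actual implementation or the paper's template set.

```python
def build_qa4re_prompt(sentence, head, tail, relation_templates):
    """Turn an RE instance into a multiple-choice QA prompt.

    relation_templates maps each candidate relation label to a sentence
    template that verbalizes the relation between {head} and {tail}.
    Returns the prompt string and a mapping from option letters back to
    relation labels. (Illustrative sketch, not the paper's exact format.)
    """
    options = []
    letter_to_label = {}
    for i, (label, template) in enumerate(relation_templates.items()):
        letter = chr(ord("A") + i)
        options.append(f"{letter}. {template.format(head=head, tail=tail)}")
        letter_to_label[letter] = label
    prompt = (
        "Determine which option can be inferred from the sentence.\n"
        f"Sentence: {sentence}\n"
        + "\n".join(options)
        + "\nAnswer:"
    )
    return prompt, letter_to_label


# Example with made-up templates for two TACRED-style relation labels.
templates = {
    "per:employee_of": "{head} works for {tail}.",
    "no_relation": "{head} has no known relation to {tail}.",
}
prompt, letter_to_label = build_qa4re_prompt(
    "Kai Zhang is a PhD student at the Ohio State University.",
    "Kai Zhang",
    "the Ohio State University",
    templates,
)
```

The LLM then only needs to output an option letter, which the mapping converts back to a relation label; this turns RE into the multiple-choice QA format that instruction-tuned models have seen far more often.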

Bio: Kai Zhang is a first-year PhD student at The Ohio State University, advised by Prof. Yu Su. His research interests are natural language processing and its real-world applications, such as information extraction and question answering. He is currently focusing on large language models, knowledge, and their interplay. He has published multiple papers at NLP conferences such as ACL, EMNLP, and NAACL. He was a research intern at Tsinghua University NLP, Dartmouth College NLP, and Microsoft.