Lunch at 12:30pm, talk at 1pm, in 148 Fitzpatrick

Abstract: While many types of hate speech and online toxicity have been the focus of extensive research in NLP, toxic language stigmatizing poor people has been mostly disregarded. Yet aporophobia, a social bias against the poor, is a common phenomenon online that can be psychologically damaging and can hinder poverty-reduction policy measures. We demonstrate that aporophobic attitudes are indeed present in social media and argue that existing NLP datasets and models are inadequate for effectively addressing this problem. Efforts toward designing specialized resources and novel socio-technical mechanisms for confronting aporophobia are needed.

Bio: Georgina Rex is a Postdoctoral Fellow at the ND Technology Ethics Center and is involved with the Economically Sustainable AI for Good Project at the ND-IBM Tech Ethics Lab. She chairs the IJCAI Symposia in the Global South and co-chairs the AI & Social Good Special Track at the International Joint Conference on Artificial Intelligence (IJCAI'23). She has been a Visiting Scholar at the Kavli Center for Ethics, Science and the Public (UC Berkeley). Focusing on issues of poverty mitigation, fairness, and inclusion, Georgina works on the design of AI socio-technical systems that provide new insights for counteracting inequality and, more broadly, on advancing interdisciplinary research toward the achievement of the UN Sustainable Development Goals (SDGs). Her research contributes to the state of the art in Natural Language Processing, Agent-Based Modeling, Social Networks, and Machine Learning, with the ultimate goal of offering insights for innovative interventions to local and global challenges.