The BIAS project attends the summer school on ‘Law and Language’ at Pavia University
Carlotta Rigotti, Postdoc researcher at eLaw, and Eduard Fosch-Villaronga, Associate Professor at eLaw, delivered a lecture on AI and non-discrimination, engaging students with the Debiaser demo.
The eLaw Center for Law and Digital Technologies at Leiden University recently took part in the summer school on ‘Law and Language,’ co-organised by the University of Pavia, Würzburg University, Poitiers University, and Pázmány Péter Catholic University. Held in Pavia (Italy) from 16-20 September, the summer school brought together experts and students to explore the intersections of law and language. Topics ranged from developing English legal skills and understanding language rules within EU institutions to examining AI as a new ‘language’ that the law must adapt to and regulate.
Representing the BIAS project, which aims to mitigate diversity biases in AI systems in the labour market, Carlotta Rigotti and Eduard Fosch-Villaronga delivered a two-day lecture on AI and non-discrimination during the summer school.
The second day of the lecture focused on the legal and ethical requirements laid out by the High-Level Expert Group on AI (AI HLEG) for achieving trustworthy AI. After discussing their potentially diverse interpretations, Carlotta and Eduard introduced the BIAS project and its Debiaser demo, inviting students to actively engage with this tool, which is designed to help users rank job applicants for a given vacancy and justify that ranking. In this interactive exercise, students were divided into groups and asked to role-play as human resources (HR) personnel in a fictitious hiring scenario. They ranked candidates both manually and with the help of the Debiaser demo, allowing them to reflect on diversity biases and on ethical considerations such as transparency and autonomy in both approaches. The lecture concluded with a general discussion on the opportunities and limitations of AI in the labour market.
The BIAS Project: Get involved!
The BIAS project aims to identify and mitigate diversity biases (e.g. related to gender and race) of artificial intelligence (AI) applications in the labour market, especially in human resources (HR) management.
To build new and consolidate existing knowledge about diversity biases and fairness in AI and HR, the BIAS Consortium is currently running several activities that you might be interested in discovering and joining, such as capacity-building sessions and ethnographic fieldwork.
If you want to stay up to date on our activities and/or participate in the project in different capacities, please join our national communities of stakeholders, which bring together participants from different ecosystems: HR officers, AI developers, scholars, policymakers, trade union representatives, workers, and civil society organisations.
To join the national labs, please follow this link.