Pre-registration form: https://forms.gle/YBCwn7L8N5AxExMG7
The discussion on defining and measuring algorithmic (un)fairness was predominantly a focus of the early stages of algorithmic fairness research, resulting in four main fairness denominations: individual or group, statistical or causal, equalizing or non-equalizing, and temporal or non-temporal fairness. Since then, much work in the field has been dedicated to providing methodological advances within each denomination and to understanding the various trade-offs between fairness metrics. However, given the changing machine learning landscape, with both increasing global applications and the emergence of large generative models, the question of understanding and defining what constitutes “fairness” in these systems has become paramount again.
On the one hand, definitions of algorithmic fairness are being critically examined with respect to the historical and cultural values they encode. The mathematical conceptualization of these definitions, and their operationalization through the satisfaction of statistical parities, has also drawn criticism for not taking into account the context within which these systems are deployed.
On the other hand, it is still unclear how to reconcile standard fairness metrics and evaluations, developed mainly for prediction and classification tasks, with large generative models. While some works have proposed adapting existing fairness metrics, e.g., to large language models, questions remain on how to systematically measure fairness for textual outputs, or even for multi-modal generative models. Large generative models also pose new challenges to fairness evaluation, with recent work showcasing how biases towards specific tokens in large language models can influence fairness assessments during evaluation. Finally, regulatory requirements introduce new challenges in defining, selecting, and assessing algorithmic fairness.
Given these critical and timely considerations, this workshop aims to investigate how to define and evaluate (un)fairness in today’s machine learning landscape.
Title: Harm Detectors and Guardian Models for LLMs: Implementations, Uses, and Limitations
Title: Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Title: Reflections on Fairness Measurement: From Predictive to Generative AI
Title: Evaluating the Ethical Competence of LLMs
Submission portal: CMT
Deadlines:
Submission Deadline: Sep 12, 2024 AoE
Acceptance Notification: Oct 09, 2024 AoE
Format: maximum one page PDF, references included.
Upload a 1-page PDF file on CMT. The PDF should follow a one-column format; the main body text must be at least 11 point font, and page margins must be at least 0.5 inches on all sides.