A NeurIPS 2024 Workshop
Algorithmic Fairness through the Lens of Metrics and Evaluation
December 14, 2024
Pre-registration form: https://forms.gle/YBCwn7L8N5AxExMG7
Submission portal: https://cmt3.research.microsoft.com/AFME2024
The Algorithmic Fairness through the Lens of Metrics and Evaluation (AFME) workshop aims to spark discussions on revisiting algorithmic fairness metrics and evaluation in light of advances in large language models and international regulation.
Defining and measuring algorithmic (un)fairness was a predominant focus in the early stages of algorithmic fairness research, resulting in four main fairness denominations: individual or group, statistical or causal, equalizing or non-equalizing, and temporal or non-temporal fairness. Since then, much work in the field has been dedicated to methodological advances within each denomination and to understanding the trade-offs between fairness metrics. However, given the changing machine learning landscape, with both increasing global applications and the emergence of large generative models, the question of understanding and defining what constitutes “fairness” in these systems has become paramount again.
On one hand, definitions of algorithmic fairness are being critically examined for the historical and cultural values they encode. The mathematical conceptualization of these definitions, and their operationalization through satisfying statistical parities, has also drawn criticism for not taking into account the context in which these systems are deployed.
On the other hand, it remains unclear how to reconcile standard fairness metrics and evaluations, developed mainly for prediction and classification tasks, with large generative models. While some works have proposed adapting existing fairness metrics, e.g., to large language models, questions remain about how to systematically measure fairness for textual outputs, or even for multi-modal generative models. Large generative models also pose new challenges for fairness evaluation, with recent work showcasing how biases towards specific tokens in large language models can influence fairness assessments during evaluation. Finally, regulatory requirements introduce new challenges in defining, selecting, and assessing algorithmic fairness.
Given these critical and timely considerations, this workshop aims to investigate how to define and evaluate (un)fairness in today’s machine learning landscape.
Invited Speakers
Assistant Professor, Ethics and Computational Technologies, Carnegie Mellon University
Staff Research Scientist, Google DeepMind
Professor of Philosophy, Australian National University
Panellists
Assistant Professor, Ethics and Computational Technologies, Carnegie Mellon University
Staff Research Scientist, Google DeepMind
Professor of Philosophy, Australian National University
Call for Papers
Submission portal: CMT
Submissions to the Paper track should describe new projects aimed at studying algorithmic fairness in light of broad recent advances, i.e., large generative models, international regulatory efforts, and new bias evaluation frameworks. Submissions should include theoretical or empirical results demonstrating the approach and specify how the project fills a gap in the current literature. Authors of accepted papers have the option to upload a 3-minute video presentation of their paper. All recorded talks will be made available on the workshop website.
We welcome submissions of novel work in the area of fairness, with special interest in (but not limited to):
- Failure modes of current fairness definitions
- Limitations of existing fairness definitions and evaluation techniques
- Proposals for contextual fairness
- Development of fairness metrics and evaluation techniques for large generative models
- Regulatory compliant use of metrics and evaluation of fairness
- Analysis of regulatory frameworks with respect to fairness metrics and/or evaluation
- Red-teaming, alignment, safety, and other topics related to algorithmic (un)fairness
We also accept broad work on fairness including:
- Fairness metrics and mitigation techniques
- Novel, application-specific formalizations of fairness
- Ethical considerations in deploying fair algorithms in recent contexts
- Methods to ensure fairness in the context of generative models
- Algorithmic approaches to capturing and mitigating evolving biases
- Methods in fairness-related subfields (model multiplicity, interpretability, robustness, etc.)
Deadlines:
Abstract: Sep 06, 2024 AoE
Full submission: Sep 12, 2024
Acceptance Notification: Oct 09, 2024 AoE
Format: 4-9 pages, not including references and appendix. The impact statement and checklist are optional and do not count towards the page limit.
Dual-submission policy: we accept submissions of ongoing unpublished work as well as work submitted elsewhere (FAccT, ICLR, SaTML, etc.), or substantial extensions of work presented at other venues (not in proceedings). However, we do not accept work that has previously been accepted to a journal or conference proceedings (including the main NeurIPS conference). Work presented at the main NeurIPS conference should not also appear in a workshop.
Extended Abstract Track
Format: 1 page (max, anonymized) in PDF format
Submission portal: CMT
The extended abstract track welcomes submissions of 1-page abstracts (including references) that provide new perspectives, discussions, or novel methods that are not yet finalized on the topics of fairness, fairness metrics, regulation of large generative models, and/or fairness in generative models. Accepted abstracts will be presented as posters at the workshop.
Deadline: Sep 12, 2024 AoE
Acceptance Notification: Oct 09, 2024 AoE
Format: maximum one page PDF, references included.
Upload a 1-page PDF file on CMT. The PDF should follow the one-column format; main body text must be at least 11-point font, and page margins must be at least 0.5 inches on all sides.
Organizers
(Meta)
Code of Conduct
The AFME workshop abides by the NeurIPS code of conduct. Participation in the event requires agreeing to the code of conduct.
Reviewer Volunteer Form
If you would like to help as a reviewer, please fill out the form below.