Pre-registration form: https://forms.gle/YBCwn7L8N5AxExMG7
virtual NeurIPS portal: https://neurips.cc/virtual/2023/workshop/66502
Fairness has been predominantly studied under the static regime, assuming an unchanging data generation process. However, these approaches neglect the dynamic interplay between algorithmic decisions and the individuals they impact, which has been shown to be prevalent in practical settings. This observation has highlighted the need to study the long-term effects of fairness mitigation strategies and to incorporate dynamical systems into the development of fair algorithms.
Despite prior research identifying several impactful scenarios where such dynamics can occur, including bureaucratic processes, social learning, recourse, and strategic behavior, extensive investigation of the long-term effects of fairness methods remains limited. Initial studies have shown how enforcing static fairness constraints in dynamical systems can lead to unfair data distributions and may perpetuate or even amplify biases.
Additionally, the rise of powerful large generative models has brought to the forefront the need to understand fairness in evolving systems. The general capabilities and widespread use of these models raise the critical question of how to assess them for fairness and mitigate observed biases from a long-term perspective. Importantly, mainstream fairness frameworks have been developed around classification and prediction tasks. How can we reconcile these existing techniques (pre-processing, in-processing, and post-processing) with the development of large generative models?
Given these open questions, this workshop aims to investigate in depth how to address fairness concerns in settings where learning occurs sequentially or in evolving environments.
Title: At the Intersection of Algorithmic Fairness and Causal Representation Learning
Title: Performativity and Power in Prediction
Title: Uncovering Hidden Bias: Auditing Language Models with a Social Stigma Lens
Title: A Framework for Responsible Deployment of Large Language Models
Deadlines:
Acceptance Notification: Oct 27, 2023 AoE
Submission Deadline: Oct 4, 2023 AoE (extended from Sep 29, 2023)
Format: maximum one-page PDF, references included.
Upload a 1-page PDF file on CMT. The PDF should follow a one-column format; main body text must be at least 11-point font size, and page margins must be at least 0.5 inches on all sides.