Virtual NeurIPS 2021 Workshop

Algorithmic Fairness through the Lens of Causality and Robustness

December 13, 2021, 09:00 AM – 08:30 PM GMT

Pre-registration: https://forms.gle/57o3SqV7t7edtW667

Virtual Website: https://neurips.cc/virtual/2021/workshop/21850

Information Booklet: https://drive.google.com/file/d/1Bbq2DAnScbUufcwvxwkqCtF1lvc5TD1s/view

The Algorithmic Fairness through the Lens of Causality and Robustness (AFCR) workshop aims to spark discussions on how open questions in algorithmic fairness can be addressed with Causality and Robustness.

Recently, relationships between techniques and metrics used across different fields of trustworthy ML have emerged, leading to interesting work at the intersection of algorithmic fairness, robustness, and causality.

On one hand, causality has been proposed as a powerful tool to address the limitations of initial statistical definitions of fairness. However, questions have emerged regarding 1) the applicability of such approaches due to strong assumptions inherent to causal questions and 2) the suitability of a causal framing for studies of bias and discrimination.

On the other hand, the Robustness literature has surfaced promising approaches to improve fairness in ML models. For instance, parallels can be drawn between individual fairness and local robustness guarantees, or between group fairness metrics and robustness to distribution shift. Beyond these similarities, studying the interactions between fairness and robustness can help us understand how fairness guarantees hold under distribution shift or adversarial/poisoning attacks, leading to ML models that are both fair and robust.

After a first edition of this workshop that focused on causality and interpretability, we will turn to the intersection between algorithmic fairness and recent techniques in causality and robustness. In this context, we will investigate how these different topics relate, but also how they can augment each other to provide better or more suitable definitions and mitigation strategies for algorithmic fairness.

Invited Speakers

Elias Bareinboim

Associate Professor in the Department of Computer Science and Director of the Causal Artificial Intelligence Lab at Columbia University.

Silvia Chiappa

Senior Staff Research Scientist in Machine Learning at DeepMind and Honorary Professor at the Computer Science Department of University College London.

Isabel Valera

Professor of Machine Learning at the Department of Computer Science at Saarland University and Adjunct Faculty at the Max Planck Institute for Software Systems.

Hima Lakkaraju

Assistant Professor in the Business School and Department of Computer Science at Harvard University.

Rumi Chunara

Associate Professor of Computer Science & Engineering, Biostatistics, and Epidemiology at New York University, Tandon School of Engineering.

Aditi Raghunathan

Postdoctoral researcher at UC Berkeley. Incoming Assistant Professor at Carnegie Mellon University.

Panelists

Been Kim (Google Brain)

Solon Barocas (Microsoft Research)

Ricardo Silva (UCL)

Rich Zemel (U. of Toronto)

Roundtable Leads

Causality for Fairness

Issa Kohler-Hausmann (Yale University)

Matt Kusner (UCL)

Maggie Makar (University of Michigan)

Ioana Bica (University of Oxford)

Robustness for Fairness

Silvia Chiappa (DeepMind)

Alex D’Amour (Google Research)

Elliot Creager (U. of Toronto)

General Fairness

Isabel Valera (Saarland University)

Ulrich Aïvodji (UQAM)

Keziah Naggita (TTIC)

Stephen Pfohl (Stanford)

Ethics

Luke Stark (U. of Western Ontario)

Irene Y. Chen (MIT)

Lizzie Kumar (Brown University)

Call for Papers

4-8 pages (anonymized), NeurIPS format, submitted via CMT

Abstract deadline: September 13, extended to September 20

Full submission deadline: September 17, extended to September 24

Paper submissions should describe new projects aimed at using Causality and/or Robustness to address fairness in machine learning. Submissions should include theoretical or empirical results demonstrating the approach and should specify how the project fills a gap in the current literature. Authors of accepted papers will be required to upload a 10-minute video presentation of their paper. All recorded talks will be made available on the workshop website.

We welcome submissions of novel work in the area of fairness, with special interest in (but not limited to):

  • Failure modes of all current fairness definitions (statistical, causal, and otherwise)

  • New causal definitions of fairness

  • How can causally grounded fairness methods help develop more robust fairness algorithms in practice?

  • What is an appropriate causal framing in studies of discrimination?

  • How do approaches for adversarial/poisoning attacks target algorithmic fairness?

  • How do fairness guarantees hold under distribution shift?

One-page Extended Abstract Track

The extended abstract track welcomes submissions of 1-page abstracts (including references) that provide new perspectives, discussions, or not-yet-finalized novel methods on the topics of fairness, causality, and/or robustness. Accepted abstracts will be presented as posters at the workshop.


Submission guidelines:

Fill out the submission form and upload a 1-page PDF file. The PDF should use a one-column format; the main body text must be at least 11-point font, and page margins must be at least 0.5 inches on all sides.
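
For reference, a minimal LaTeX skeleton along the following lines would satisfy these constraints; the article class and geometry package are illustrative choices on our part, not requirements of the workshop:

    % Minimal sketch of a 1-page extended abstract meeting the stated
    % constraints: one column, at least 11pt body text, at least 0.5in margins.
    \documentclass[11pt]{article}        % single-column, 11pt body text
    \usepackage[margin=0.5in]{geometry}  % 0.5in margins on all sides

    \title{Extended Abstract Title}
    \author{Author One \and Author Two}
    \date{}

    \begin{document}
    \maketitle
    % The whole abstract, including references, must fit on this single page.
    Main body text goes here.
    \end{document}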


Acceptance notification:

The extended abstract track follows a rolling review process, and submissions will be accepted until November 30. Submissions will be reviewed on a case-by-case basis, and authors will be notified of a decision as soon as reviewing is completed.


Submission form: https://forms.gle/YriXN6d9v8gNTedeA

Organizers

Reviewer Volunteer Form

If you would like to help as a reviewer, please fill out the form below.