Abstract deadline: September 13, 2021
Full submission: September 17, 2021 (11:59 pm Anywhere on Earth)
Paper submissions (4 to 8 pages, not including references) should describe new projects aimed at using causality and/or robustness to address fairness in machine learning. Submissions should include theoretical or empirical results demonstrating the approach and specify how the project fills a gap in the current literature.
We welcome submissions of novel work in the area of fairness, with a special interest in (but not limited to):
Failure modes of all current fairness definitions (statistical, causal, and otherwise)
Methods to encode domain-specific fairness knowledge into causal models
New causal definitions of fairness
Studies of practical limitations of causally grounded fairness methods
Trade-offs between fairness and robustness
How adversarial and data-poisoning attacks can target algorithmic fairness
How fairness guarantees hold up under distribution shift