Fairness Datasets

A survey on datasets for fairness-aware machine learning

As decision-making increasingly relies on Machine Learning (ML) and (big) data, the issue of fairness in data-driven Artificial Intelligence (AI) systems is receiving growing attention from both research and industry. A large variety of fairness-aware machine learning solutions have been proposed, involving fairness-related interventions in the data, the learning algorithms, and/or the model outputs. However, a vital part of proposing new approaches is evaluating them empirically on benchmark datasets that represent realistic and diverse settings. Therefore, in this paper we survey real-world datasets used for fairness-aware machine learning, focusing on tabular data as the most common data representation in this field. We start our analysis by identifying relationships between the different attributes, particularly between the protected attributes and the class attribute, using a Bayesian network. For a deeper understanding of bias in the datasets, we then investigate the most interesting of these relationships using exploratory analysis.
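As a toy illustration of the kind of exploratory bias analysis the survey describes (not the authors' own code), the sketch below computes the positive-class rate per protected group and a conditional view across attribute pairs on a small synthetic tabular dataset; the column names and data are assumptions.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a fairness benchmark table (all values are assumptions).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "sex": rng.choice(["female", "male"], size=n),    # protected attribute
    "age_group": rng.choice(["<40", ">=40"], size=n),  # another attribute
    "income_high": rng.integers(0, 2, size=n),         # binary class attribute
})

# Positive-class rate per protected group: a first look at disparate base rates.
print(df.groupby("sex")["income_high"].mean())

# Class rate per (protected attribute, other attribute) cell -- the kind of
# dependency a Bayesian network over all attributes surfaces more systematically.
print(pd.crosstab([df["sex"], df["age_group"]], df["income_high"], normalize="index"))
```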

Dataset bias in diagnostic AI systems - Guidelines for dataset collection and usage

In the last few years, the FDA has begun to use De Novo pathways (approval processes for novel devices) to approve AI systems as medical devices. A major concern is that this review process does not adequately test for biases in these models. Biases can arise in many ways, including during data collection, model training, and deployment. In this paper, we adopt a framework for categorizing the types of bias in datasets in a fine-grained way, which enables informed, targeted interventions for each issue. From there, we propose policy recommendations to the FDA and NIH to promote the deployment of more equitable AI diagnostic systems.
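A minimal sketch of one narrow collection-stage check this kind of framework motivates: comparing subgroup shares in a hypothetical diagnostic dataset against reference population shares. The groups and numbers are illustrative assumptions, not taken from the paper.

```python
# Subgroup counts in the collected dataset vs. reference population shares
# (all figures are illustrative assumptions).
dataset_counts = {"group_a": 8200, "group_b": 1300, "group_c": 500}
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    observed = count / total
    expected = population_share[group]
    # A ratio well below 1 flags under-representation introduced at collection time.
    print(f"{group}: dataset share {observed:.2f} vs. population share {expected:.2f} "
          f"(ratio {observed / expected:.2f})")
```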

On Fairness and Calibration (NIPS 2017)

The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models. This has motivated a growing line of work on what it means for a classification procedure to be “fair.” In this paper, we investigate the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions of an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets.
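A minimal sketch of the two quantities in tension here: per-group calibration and per-group false-negative rates. The scores, labels, and groups are synthetic assumptions, not the paper's experiments; labels are drawn from the scores, so the scores are calibrated by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
group = rng.integers(0, 2, size=n)                        # group membership (0 or 1)
score = rng.uniform(0, 1, size=n)                         # predicted P(y = 1)
label = (rng.uniform(0, 1, size=n) < score).astype(int)   # outcomes drawn from the score
pred = (score >= 0.5).astype(int)                         # thresholded decisions

for g in (0, 1):
    m = group == g
    # Coarse calibration check: mean predicted probability vs. empirical positive rate.
    cal_gap = abs(score[m].mean() - label[m].mean())
    # False-negative rate: true positives that the thresholded classifier misses.
    pos = m & (label == 1)
    fnr = (pred[pos] == 0).mean()
    print(f"group {g}: calibration gap {cal_gap:.3f}, FNR {fnr:.3f}")
```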

The Implicit Fairness Criterion of Unconstrained Learning

We clarify what fairness guarantees we can and cannot expect to follow from unconstrained machine learning. Specifically, we show that in many settings, unconstrained learning on its own implies group calibration, that is, the outcome variable is conditionally independent of group membership given the score. A lower bound confirms the optimality of our upper bound. Moreover, we prove that the lower the excess risk of the learned score, the more strongly it violates separation and independence, two other standard fairness criteria. Our results challenge the view that group calibration necessitates an active intervention, suggesting that we often ought to think of it as a byproduct of unconstrained machine learning.
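A minimal sketch of how the other two criteria mentioned, independence and separation, can be measured for a score; the data-generating process below is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
group = rng.integers(0, 2, size=n)
label = rng.integers(0, 2, size=n)
# A score that tracks the label; because the groups share the same base rate
# by construction here, both gaps below should come out small.
score = np.clip(0.7 * label + 0.3 * rng.uniform(0, 1, size=n), 0, 1)

# Independence: the score distribution should not depend on the group.
indep_gap = abs(score[group == 0].mean() - score[group == 1].mean())

# Separation: the score should be independent of the group given the true label.
sep_gaps = [
    abs(score[(group == 0) & (label == y)].mean() -
        score[(group == 1) & (label == y)].mean())
    for y in (0, 1)
]
print(f"independence gap: {indep_gap:.3f}, "
      f"separation gaps: {', '.join(f'{g:.3f}' for g in sep_gaps)}")
```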

The Impossibility Theorem of Machine Fairness: A Causal Perspective

With the increasingly pervasive use of machine learning in social and economic settings, the notion of machine bias has drawn growing interest in the AI community. Models trained on historical data reflect biases that exist in society and propagate them into the future through their decisions. Three prominent metrics of machine fairness are used in the community, and it has been shown statistically that it is impossible to satisfy all of them at the same time. This has led to ambiguity regarding the definition of fairness. In this report, a causal perspective on the impossibility theorem of fairness is presented, along with a causal goal for machine fairness.
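A minimal sketch computing the three commonly cited group-fairness metrics (demographic parity, equalized odds, predictive parity) from per-group confusion matrices. The counts are illustrative assumptions, chosen so the groups have different base rates, which is the setting in which the statistical impossibility result applies.

```python
def metrics(tp, fp, fn, tn):
    """Group-fairness quantities derived from one group's confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "positive_rate": (tp + fp) / total,  # demographic parity
        "tpr": tp / (tp + fn),               # equalized odds, part 1
        "fpr": fp / (fp + tn),               # equalized odds, part 2
        "ppv": tp / (tp + fp),               # predictive parity
    }

# Hypothetical confusion matrices with different base rates (0.5 vs. 0.4).
group_a = metrics(tp=40, fp=10, fn=10, tn=40)
group_b = metrics(tp=20, fp=10, fn=20, tn=50)

for name in group_a:
    print(f"{name}: group A {group_a[name]:.2f} vs. group B {group_b[name]:.2f}")
```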
