Computing fairness
One of the areas in which computer science and economics overlap more than a little is in understanding what constitutes a fair way of allocating scarce resources. This was the subject of a recent Northwestern CS workshop:
Quarterly Theory Workshop: Algorithmic Fairness
"Synopsis: As algorithmic systems have increasingly been deployed to make consequential decisions, it is becoming urgent that we grapple with the problem of (un)fairness and discrimination. These are delicate issues — to think rigorously about them, we first need to figure out how to formally define what we want, and then reason about how we might go about achieving our goals algorithmically — and what tradeoffs we will have to manage. This workshop focuses on recent advances in the theory of algorithmic fairness: both on foundational definitional questions, and on algorithmic techniques. The speakers are Nicole Immorlica (MSR), Jon Kleinberg (Cornell), Omer Reingold (Stanford), and Aaron Roth (U. of Pennsylvania).
The technical program of this workshop is organized by Aaron Roth and Jason Hartline."
These were the talks:
Speaker: Jon Kleinberg
Title: Algorithmic Fairness Criteria for Evaluating Individuals and Sets
Abstract: Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates, and discuss recent research establishing inherent trade-offs between these conditions. We also consider a variety of methods for promoting fairness and related notions for classification and selection problems that involve sets rather than just individuals.
This talk is based on joint work with Sendhil Mullainathan, Manish Raghavan, and Maithra Raghu.
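To make the trade-off concrete, here is a minimal sketch (synthetic data, my illustration rather than anything from the talk) that computes two of the criteria in question: calibration within each group, and parity of false positive rates. When the groups' base rates differ, a score that is calibrated in both groups produces unequal false positive rates under a common threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)

# Calibrated risk scores with different base rates: group 1's true risk
# distribution is skewed higher (this difference drives the trade-off).
u = rng.random(n)
score = np.where(group == 0, u, np.sqrt(u))
label = rng.random(n) < score              # outcomes drawn to match the scores

pred = score >= 0.5                        # a simple threshold classifier

def calibration_gap(mask, bins=10):
    """Mean |predicted - observed| over score bins, within one group."""
    edges = np.linspace(0, 1, bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = mask & (score >= lo) & (score < hi)
        if s.any():
            gaps.append(abs(score[s].mean() - label[s].mean()))
    return float(np.mean(gaps))

def false_positive_rate(mask):
    neg = mask & ~label                    # true negatives in this group
    return float(pred[neg].mean())

# Calibration holds in both groups, yet the false positive rates diverge.
for g in (0, 1):
    m = group == g
    print(f"group {g}: calibration gap {calibration_gap(m):.3f}, "
          f"FPR {false_positive_rate(m):.3f}")
```

And this is not an artifact of the threshold: the inherent trade-off the abstract refers to says that, outside of degenerate cases, no score can satisfy calibration within groups and equal error rates across groups when the base rates differ.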
Speaker: Aaron Roth
Title: Preventing Fairness Gerrymandering in Machine Learning
Abstract: The most prevalent notions of fairness in machine learning are statistical notions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). Individual notions of fairness avoid this problem, but at the cost of needing to make often unrealistic assumptions about the setting. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. We will show how to achieve this in polynomial time, under the assumption that we are given oracles for solving the unconstrained learning problems (for both the set of classifiers to be learned, and the set of subgroups to be protected). Our algorithm casts the problem as solving for an equilibrium in a zero sum game, and observes that learning oracles are enough to efficiently play no-regret learning dynamics. We then demonstrate experimentally that our proposed algorithms are practical, by investigating their performance on several real datasets when instantiated with learning heuristics in place of oracles.
This talk is based on joint work with Michael Kearns, Seth Neel, and Steven Wu.
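Here is a minimal sketch of the auditing problem the abstract describes (synthetic data and a contrived classifier, not the paper's algorithm): false positive rates that look balanced on each protected attribute taken alone can badly violate parity on conjunctions of those attributes.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 40_000
race = rng.integers(0, 2, size=n)
gender = rng.integers(0, 2, size=n)
label = rng.random(n) < 0.3

# A "gerrymandered" classifier: high false positive rate exactly on the
# two diagonal cells (race == gender), low elsewhere, so marginals cancel.
noise = rng.random(n)
pred = np.where(race == gender, noise < 0.6, noise < 0.2)

def fpr(mask):
    neg = mask & ~label
    return float(pred[neg].mean())

# Marginally, each protected attribute looks balanced (both rates near 0.4)...
for name, attr in (("race", race), ("gender", gender)):
    print(name, [round(fpr(attr == v), 3) for v in (0, 1)])

# ...but auditing conjunctions of attributes exposes the structured subgroups.
for r, g in itertools.product((0, 1), (0, 1)):
    print(f"race={r}, gender={g}: FPR {fpr((race == r) & (gender == g)):.3f}")
```

With only two attributes one can enumerate the cells as above; the point of the paper's oracle-based, game-theoretic formulation is to handle subgroup classes far too large to enumerate.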
Speaker: Omer Reingold
Title: On Algorithmic Fairness Between Groups and Individuals
Abstract: As algorithms increasingly inform and influence decisions made about individuals, it is increasingly important to address concerns that these algorithms might be discriminatory. Historically, definitions of fairness fell into one of two extremes: (1) broad-strokes statistical guarantees; (2) individual-level protections. Statistical guarantees tend to be much easier to satisfy (information and complexity theoretically), but tend to be much weaker in the protections they provide. We will discuss two recent works towards bridging the gap between statistical and individual protections. These works provide efficient learning algorithms that also ensure every subpopulation within some rich class is protected (according to some fairness notion).
One of the notions we will discuss is that of multicalibration — a new measure of algorithmic fairness that aims to mitigate concerns about discrimination that is introduced in the process of learning a predictor from data. The second notion studies fair classification within the versatile framework of Dwork et al. [ITCS 2012], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike previous works on metric-based fairness, we study the setting where a learning algorithm does not know the entire metric, but can query similarities between particular pairs of individuals a bounded number of times. We note that we do not make any assumption on the metric and are still able to obtain meaningful protection from systemic discrimination that we refer to as “metric multifairness.”
The talk will focus on the various ways in which algorithmic fairness can be defined, but will also allude to some of the ways in which it can be obtained. It is based on joint works with Úrsula Hébert-Johnson, Michael P. Kim, and Guy Rothblum.
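As a rough illustration of what a multicalibration audit asks (toy data and a hypothetical class of subpopulations, not the paper's construction): the predictor should be calibrated not just overall but on every identifiable subpopulation in some rich class C.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.random((n, 3))                     # three observable features
label = rng.random(n) < x.mean(axis=1)     # true risk is the feature mean

# A predictor that looks only at the first feature: better calibrated
# overall than on subpopulations defined by the features it ignores.
predictor = 0.5 * x[:, 0] + 0.25

# C: a small, overlapping class of subpopulations to protect.
subpops = {"everyone": np.ones(n, dtype=bool)}
subpops.update({f"x{j} > 0.5": x[:, j] > 0.5 for j in range(3)})

def worst_calibration_violation(mask, bins=10):
    """Largest |E[label] - E[pred]| over prediction level sets in a subpopulation."""
    edges = np.linspace(0, 1, bins + 1)
    worst = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = mask & (predictor >= lo) & (predictor < hi)
        if s.sum() > 100:                  # skip statistically tiny level sets
            worst = max(worst, abs(label[s].mean() - predictor[s].mean()))
    return worst

for name, mask in subpops.items():
    print(f"{name}: worst violation {worst_calibration_violation(mask):.3f}")
```

Multicalibration demands a small violation on every set in C simultaneously, and the work described in the abstract shows that this can be achieved by efficient learning algorithms.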
Speaker: Nicole Immorlica
Title: Fair Classification and Learning
Abstract: Classification, learning, and optimization algorithms are increasingly used to make decisions of social consequence. An automated resume reader classifies applicants, deciding which are worth interviewing. A learning algorithm estimates the impact of a drug on health outcomes. A facility location algorithm chooses where to locate new warehouses. For these algorithms to be useful, practitioners strive to maximize their accuracy and optimality. However, this can result in differential treatment across population subgroups, resulting in a sense of inequity. Fairness metrics are used to measure the degree of inequity. In this talk, we explore how to jointly optimize fairness and accuracy using, as a black box, existing algorithms that optimize accuracy or optimality. Our first solution is tailored to classification tasks and proposes decoupling the classification task, using different classifiers for each subgroup. Our second solution can optimize any smooth and continuous function of accuracy and fairness in classification, learning, and optimization settings. It further has the advantage that subgroup membership, while used to train, is not used at run time. This additional power comes with the cost of added computational complexity in training.
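The decoupling idea behind the first solution is easy to sketch (synthetic data; scikit-learn's logistic regression stands in for an arbitrary black-box learner): when the feature-label relationship differs by subgroup, per-group classifiers can beat a single shared one on every group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5_000
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 4))

# The feature-label relationship differs by group, which is exactly the
# situation in which one shared classifier treats the groups unequally.
w = np.where(group[:, None] == 0, [1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0])
y = (X * w).sum(axis=1) + rng.normal(scale=0.5, size=n) > 0

shared = LogisticRegression().fit(X, y)
decoupled = {g: LogisticRegression().fit(X[group == g], y[group == g])
             for g in (0, 1)}

for g in (0, 1):
    m = group == g
    acc_shared = (shared.predict(X[m]) == y[m]).mean()
    acc_decoupled = (decoupled[g].predict(X[m]) == y[m]).mean()
    print(f"group {g}: shared acc {acc_shared:.3f}, decoupled acc {acc_decoupled:.3f}")
```

Note that this decoupled sketch needs group membership at prediction time; the abstract's second solution is notable precisely because it uses subgroup membership only during training.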