<!--
@license
Copyright 2020 Google. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="apple-touch-icon" sizes="180x180" href="https://pair.withgoogle.com/images/favicon/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="https://pair.withgoogle.com/images/favicon/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="https://pair.withgoogle.com/images/favicon/favicon-16x16.png">
<link rel="mask-icon" href="https://pair.withgoogle.com/images/favicon/safari-pinned-tab.svg" color="#00695c">
<link rel="shortcut icon" href="https://pair.withgoogle.com/images/favicon.ico">
<script>
!(function(){
var url = window.location.href
if (url.split('#')[0].split('?')[0].slice(-1) != '/' && !url.includes('.html')) window.location = url + '/'
})()
</script>
<title>Measuring Diversity</title>
<meta property="og:title" content="Measuring Diversity">
<meta property="og:url" content="https://pair.withgoogle.com/explorables/measuring-diversity/">
<meta name="og:description" content="Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.">
<meta property="og:image" content="https://pair.withgoogle.com/explorables/images/measuring-diversity.png">
<meta name="twitter:card" content="summary_large_image">
<link rel="stylesheet" type="text/css" href="../style.css">
<link href='https://fonts.googleapis.com/css?family=Roboto+Slab:400,500,700|Roboto:700,500,300' rel='stylesheet' type='text/css'>
<link href="https://fonts.googleapis.com/css?family=Google+Sans:400,500,700" rel="stylesheet">
<meta name="viewport" content="width=device-width">
</head>
<body>
<div class='header'>
<div class='header-left'>
<a href='https://pair.withgoogle.com/'>
<img src='../images/pair-logo.svg' style='width: 100px'>
</a>
<a href='../'>Explorables</a>
</div>
</div>
<h1 class='headline'>Measuring Diversity</h1>
<div class="post-summary">Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.</div>
<link rel="stylesheet" href="style.css">
<p>Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a <a href="https://www.nytimes.com/interactive/2018/04/24/upshot/women-and-men-named-john.html">page of white men</a>, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels. </p>
<p>Using the quantification approach outlined in a recent paper, <a href="https://arxiv.org/pdf/2002.03256.pdf">Diversity and Inclusion Metrics in Subset Selection</a>, we can measure these biases and push these systems to return a wider range of results. </p>
<p>The mathematics of all this is a little easier to follow with abstract shapes. Let’s take a look at some of them:</p>
<div id='all-shapes' class='shapes'></div>
<p>Suppose we want to return about <b>30% green boxes</b> to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return?</p>
<div id='pick-green' class='shapes'></div>
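<p>Under the hood, scoring a subset against a single target is just measuring the gap between the subset’s actual percentage and the target percentage. Here’s a minimal sketch of that calculation; the shape data and helper names are illustrative, not this page’s actual code:</p>
<pre>
// Minimal sketch: how far is a subset from a 30%-green target?
// The shape data and helpers here are illustrative, not this page's code.
var subset = [
  {color: 'green', pattern: 'dot',   size: 'small'},
  {color: 'red',   pattern: 'plain', size: 'large'},
  {color: 'green', pattern: 'plain', size: 'small'},
  {color: 'blue',  pattern: 'dot',   size: 'large'},
]

// Fraction of a set of shapes passing some test.
function percent(shapes, test){
  return shapes.filter(test).length / shapes.length
}

// Absolute gap between the subset's green share and the 30% target.
var greenDifference = Math.abs(
  percent(subset, d => d.color == 'green') - 0.30)  // 0.20 here
</pre>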
<p>Another diversity metric we care about is the percentage of dots… how close to <b>35% dots</b> can you get?</p>
<div id='pick-triangle' class='shapes'></div>
<p>If we can only return a single subset, how should we balance multiple diversity metrics? Sometimes it isn’t possible to reduce every metric’s difference to zero. One natural approach: find the selection with the <strong>lowest mean difference</strong> across all the metrics to get as close as possible to all the targets. </p>
<p>In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the <strong>lowest max difference</strong>. Try minimizing both below: </p>
<div id='pick-metric' class='shapes' style='margin-bottom: 0px'></div>
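<p>Concretely, both measures start from the same per-metric differences and only differ in how they aggregate them. A sketch, reusing the illustrative <code>percent</code> helper from above (<code>candidateSubsets</code> stands in for whatever sets are being compared):</p>
<pre>
// Illustrative targets for the three metrics above.
var targets = [
  {test: d => d.color == 'green', goal: 0.30},
  {test: d => d.pattern == 'dot', goal: 0.35},
  {test: d => d.size == 'small',  goal: 0.50},
]

// One absolute difference per metric for a given subset.
function differences(subset){
  return targets.map(t => Math.abs(percent(subset, t.test) - t.goal))
}

var meanDifference = subset => d3.mean(differences(subset))
var maxDifference  = subset => d3.max(differences(subset))

// Pick the subset minimizing each aggregate: the mean gets close to
// all targets on average; the max avoids badly missing any one target.
function bestBy(subsets, score){
  return subsets.slice().sort((a, b) => score(a) - score(b))[0]
}
var bestByMean = bestBy(candidateSubsets, meanDifference)
var bestByMax  = bestBy(candidateSubsets, maxDifference)
</pre>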
<p>Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results? </p>
<h3 id="ranking-measures">Ranking Measures</h3>
<p>We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set’s percentages of green, dot and small shapes are shown in the small histograms. </p>
<div id='columns-height'></div>
<p>At the extremes, the choice of measure can have a big impact: if we want to try to return all green results, we can shift the green target up to 100%. With this target, minimizing the max difference basically sorts the sets by their number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target against the dot and small targets.</p>
<div id='columns-height-disagree'></div>
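<p>The ranked columns above are essentially this same computation applied to every candidate set. A sketch of the ranking, reusing the illustrative scoring functions from above, with <code>candidateSets</code> again standing in for the 20 sets:</p>
<pre>
// Rank every candidate set by a measure, smallest difference first.
function rankBy(sets, score){
  return sets.slice().sort((a, b) => score(a) - score(b))
}
var rankedByMean = rankBy(candidateSets, meanDifference)
var rankedByMax  = rankBy(candidateSets, maxDifference)

// With the green target at 100%, every set undershoots green badly,
// so the green gap usually is the max difference: ranking by it
// roughly sorts sets by green count, with other targets as tiebreakers.
</pre>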
<p>Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for <a href="https://en.wikipedia.org/wiki/Intersectionality">intersectionality</a>. The absolute difference between the target and actual percentages can also be replaced with other penalties; you might want to penalize undershooting more than overshooting, for example. It’s important to keep in mind exactly what you’re trying to optimize and the dataset you’re operating on.</p>
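<p>For instance, if falling short of a target is worse than exceeding it, the absolute difference can be swapped for an asymmetric penalty. A sketch, with an arbitrary 3x weight on undershooting:</p>
<pre>
// Sketch: penalize falling short of a target more than exceeding it.
// The 3x undershoot weight is an arbitrary, illustrative choice.
function asymmetricDifference(actual, goal){
  var gap = actual - goal
  return gap >= 0 ? gap : -3*gap
}
</pre>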
<h3 id="which-measure-is-best-">Which Measure is Best?</h3>
<p>In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context.</p>
<p>For example, the doctors on the left have more variance along the shirt color attribute, but they’re less diverse by gender than the doctors on the right. With the shirt color and gender targets we’ve picked, the two subsets have the same mean and max differences. However, in most applications it’s more important to have a representative sample of socially relevant characteristics, like gender, than of something less salient, like clothing color. </p>
<div id='coat-v-gender'></div>
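<p>One way to encode that judgment is to weight each metric by its social salience before aggregating, so a gender gap costs more than a shirt-color gap. A sketch, reusing the illustrative <code>percent</code> helper; the attribute names, goals and weights are illustrative:</p>
<pre>
// Sketch: weight socially salient attributes more heavily.
// Attribute names, goals and weights are illustrative choices.
var weightedTargets = [
  {test: d => d.gender == 'feminine', goal: 0.5, weight: 1.0},
  {test: d => d.shirt == 'blue',      goal: 0.5, weight: 0.2},
]

// Weighted mean of the per-metric differences.
function weightedMeanDifference(subset){
  var totalWeight = d3.sum(weightedTargets, t => t.weight)
  var weightedSum = d3.sum(weightedTargets, t =>
    t.weight*Math.abs(percent(subset, t.test) - t.goal))
  return weightedSum/totalWeight
}
</pre>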
<p>Just selecting a diverse sample isn’t sufficient either. <a href="https://arxiv.org/pdf/2002.03256.pdf">Diversity and Inclusion Metrics in Subset Selection</a> introduces a way of measuring “inclusion”: how well does the searcher feel represented in the results?</p>
<p>Below, we have gender diversity without inclusion for women in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted through historic nostalgia, toys and clipart, or shown as passive.</p>
<div id='construction'></div>
<p>The context of the query and the searcher also plays into the quality of search results. A search for “work clothing” that shows a mixed palette of colors for men’s clothing and only pink women’s clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women’s clothes might be appropriate to show for a “pink women work clothes” search or if the searcher had previously expressed a preference for pink.</p>
<p>We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems. </p>
<h3 id="more-reading">More Reading</h3>
<p>The <a href="https://arxiv.org/pdf/2002.03256.pdf">Diversity and Inclusion Metrics</a> paper has a <a href="https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/source/measuring-diversity/diversity-and-inclusion.ipynb">Colab</a> with a detailed description of the metrics, additional visualizations and a reference Python implementation. </p>
<p>The difficulties of <a href="https://pair.withgoogle.com/explorables/measuring-fairness/">measuring fairness</a> in general have been well studied; subset selection is still an active area of research. <a href="https://www.cs.cornell.edu/~tj/publications/singh_joachims_18a.pdf">Fairness of Exposure in Rankings</a> proposes a ranking algorithm that incorporates fairness constraints. <a href="https://www.ilab.cs.rutgers.edu/~rg522/publication/gao-2020-ipm/gao-2020-ipm.pdf">Toward creating a fairer ranking in search engine results</a> measures diversity bias in actual search results. </p>
<p>Inferring user preferences is also tricky; you can check out ways to design for user feedback and control over queries in the <a href="https://pair.withgoogle.com/chapter/feedback-controls/">People + AI Guidebook</a>.</p>
<h3 id="credits">Credits</h3>
<p>Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell* and Timnit Gebru* // March 2021</p>
<p>*Work done while at Google</p>
<p>Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece.</p>
<h3>More Explorables</h3>
<p id='recirc'></p>
<script src='../third_party/d3_.js'></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/seedrandom/3.0.5/seedrandom.min.js">
</script>
<script src='sliders.js'></script>
<script src='script.js'></script>
<script src='image-layout.js'></script>
<script src='columns-height.js'></script>
<script src='../third_party/recirc.js'></script>
</body>
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-138505774-1"></script>
<script>
if (window.location.origin === 'https://pair.withgoogle.com'){
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-138505774-1');
}
</script>
<script>
// Tweaks for displaying in an iframe
if (window !== window.parent){
// Open links in a new tab
Array.from(document.querySelectorAll('a'))
.forEach(e => {
// skip anchor links (check the raw attribute; e.href is the resolved absolute URL)
if (e.getAttribute('href') && e.getAttribute('href')[0] == '#') return
e.setAttribute('target', '_blank')
e.setAttribute('rel', 'noopener noreferrer')
})
// Remove recirc h3
Array.from(document.querySelectorAll('h3'))
.forEach(e => {
if (e.textContent != 'More Explorables') return
e.parentNode.removeChild(e)
})
// Remove recirc container
var recircEl = document.querySelector('#recirc')
recircEl.parentNode.removeChild(recircEl)
}
</script>
</html>