<!--
@license
Copyright 2020 Google. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<!DOCTYPE html>

<html>
<head>
	<meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">

  <link rel="apple-touch-icon" sizes="180x180" href="https://pair.withgoogle.com/images/favicon/apple-touch-icon.png">
  <link rel="icon" type="image/png" sizes="32x32" href="https://pair.withgoogle.com/images/favicon/favicon-32x32.png">
  <link rel="icon" type="image/png" sizes="16x16" href="https://pair.withgoogle.com/images/favicon/favicon-16x16.png">
  <link rel="mask-icon" href="https://pair.withgoogle.com/images/favicon/safari-pinned-tab.svg" color="#00695c">
  <link rel="shortcut icon" href="https://pair.withgoogle.com/images/favicon.ico">

  <script>
    !(function(){
      var url = window.location.href
      if (url.split('#')[0].split('?')[0].slice(-1) != '/' && !url.includes('.html')) window.location = url + '/'
    })()
  </script>

  <title>Are Model Predictions Probabilities?</title>
  <meta property="og:title" content="Are Model Predictions Probabilities?">
  <meta property="og:url" content="https://pair.withgoogle.com/explorables/uncertainty-calibration/">

  <meta name="og:description" content="Machine learning models express their uncertainty as model scores, but through calibration we can transform these scores into probabilities for more effective decision making.">
  <meta property="og:image" content="https://pair.withgoogle.com/explorables/images/uncertainty-calibration.png">
  <meta name="twitter:card" content="summary_large_image">
  
	<link rel="stylesheet" type="text/css" href="../style.css">

  <link href='https://fonts.googleapis.com/css?family=Roboto+Slab:400,500,700|Roboto:700,500,300' rel='stylesheet' type='text/css'>  
  <link href="https://fonts.googleapis.com/css?family=Google+Sans:400,500,700" rel="stylesheet">

	<meta name="viewport" content="width=device-width">
</head>
<body>
  <div class='header'>
    <div class='header-left'>
      <a href='https://pair.withgoogle.com/'>
<img src='../images/pair-logo.svg' style='width: 100px'>
      </a>
      <a href='../'>Explorables</a> 
    </div>
  </div>
  
  <h1 class='headline'>Are Model Predictions Probabilities?</h1>
  
  <div id='container'>
<div id='graph'></div>
<div id='sections'>

<div>

If a machine learning model tells you that it’s going to rain tomorrow with a score of 0.60, should you buy an umbrella?<a class='footstart'>1</a> 

<p> In the diagram, we have a hypothetical machine learning classifier for predicting rainy days. For each date, the classifier reads in relevant signals like temperature and humidity and spits out a number between 0 and 1. Each data point represents a different day, with its position showing the model’s prediction for rain and its symbol (🌧️ or ☀️) showing the weather that actually occurred that day. 

<p> <div id='card'> Do the model’s predictions tell us the probability of rain?</div>

<p> In general, machine learning classifiers don’t just give binary predictions, but instead provide a numerical value between 0 and 1. This number, sometimes called the <em>model score</em> or <em>confidence</em>, is a way for the model to express its certainty about which class the input data belongs to. In most applications, the exact score is ignored and we use a threshold to round the score to a binary answer: yes or no, rain or not. However, through <em>calibration</em> we can transform these scores into probabilities and use them more effectively in decision making.

</div>

<div> <h3>Thresholding</h3>

<p> One traditional approach to using a model’s score is through <span class='highlight'><em>thresholding</em></span>. In this setting, you choose a threshold <em>t</em> and then declare that the model thinks it’s going to rain if the score is above <em>t</em>, and that it isn’t if the score is below, thereby converting the score to a binary outcome. Once you observe the actual weather, you know how often the model was wrong and can compute key aggregate statistics like <a href="https://en.wikipedia.org/wiki/Accuracy_and_precision#In_binary_classification" target="_blank"><em>accuracy</em></a>.

<p> We can sometimes treat these aggregate statistics themselves as probabilities. For example, accuracy is the probability that the binary prediction of your model (rain or not) is equal to the ground truth (🌧️ or ☀️). 
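<p> As a small sketch of this bookkeeping, using made-up scores and labels rather than the data in the diagram:

<pre>
// Hypothetical model scores and ground truth (1 = rain 🌧️, 0 = no rain ☀️).
var scores = [0.10, 0.35, 0.55, 0.62, 0.80, 0.91]
var labels = [0,    0,    1,    0,    1,    1]

// Convert each score into a binary prediction using a threshold t.
function predict(scores, t){ return scores.map(s => s > t ? 1 : 0) }

// Accuracy: the fraction of binary predictions that match the ground truth.
function accuracy(preds, labels){
  return preds.filter((p, i) => p == labels[i]).length/labels.length
}

console.log(accuracy(predict(scores, 0.4), labels)) // 5 of 6 correct ≈ 0.83
</pre>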
</div>

<div> <h3>Adjustable Thresholding</h3>

<p>The threshold can easily be changed after the model is trained.

<p> Thresholding uses the model’s score to make a decision, but it discards most of what the score tells us about the model’s confidence. The score is only used to decide whether you are above or below the threshold; the magnitude of the difference isn’t considered. For example, if you threshold at 0.4, the model’s predictions of 0.6 and 0.9 are treated the same, even though the model is much more confident in the latter.

<div id='card'> Can we do a better job of incorporating the model score into our understanding of the model? </div>

</div>

<div> <h3>Calibration</h3>

<p> <span class='highlight'><em>Calibration</em></span> lets us compare our model scores directly to probabilities. 

<p> For this technique, instead of one threshold, we have many, which we use to split the predictions into buckets. Again, once we observe the ground truth, we can see what proportion of the predictions in each bucket were rainy days (🌧️). This proportion is the <em>empirical probability</em> of rain for that bucket.

<p> Ideally, we want this proportion to be higher for higher buckets, so that the empirical probability is roughly in line with the average prediction for each bucket. We call the difference between the proportion and the average prediction a bucket’s calibration error, and by averaging this over all of the buckets, we can calculate the <a href="https://arxiv.org/pdf/1706.04599.pdf" target="_blank">Expected Calibration Error</a>. If the proportions and the predictions line up well, meaning the error is low, then we say the model is “well-calibrated” and we can consider treating the model score as the probability that it will actually rain.
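<p> A minimal sketch of this calculation, assuming evenly spaced buckets and the same made-up scores and labels as in the thresholding sketch above:

<pre>
// Expected Calibration Error: bucket the predictions, compare each bucket's
// average score to its empirical fraction of rainy days, then average the
// gaps weighted by bucket size.
function expectedCalibrationError(scores, labels, numBuckets){
  // Assign each prediction to a bucket by its score.
  var buckets = Array.from({length: numBuckets}, () => [])
  scores.forEach((s, i) => {
    var b = Math.min(Math.floor(s*numBuckets), numBuckets - 1)
    buckets[b].push(i)
  })

  // For each non-empty bucket, compare the average prediction to the
  // empirical probability of rain, weighting the gap by bucket size.
  var ece = 0
  buckets.forEach(idx => {
    if (!idx.length) return
    var avgScore = idx.reduce((sum, i) => sum + scores[i], 0)/idx.length
    var rainRate = idx.reduce((sum, i) => sum + labels[i], 0)/idx.length
    ece += idx.length/scores.length*Math.abs(avgScore - rainRate)
  })
  return ece
}

console.log(expectedCalibrationError(
  [0.10, 0.35, 0.55, 0.62, 0.80, 0.91], [0, 0, 1, 0, 1, 1], 5))
</pre>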
</div>

<div> <h3>Adjusting Calibration</h3>

<p> We saw above that a well-calibrated model allows us to treat our model score as a kind of probability. But if we start with a poorly calibrated model, one that is over- or under-confident, is there anything we can do to improve it?

<p> It turns out that, in many settings, we can adjust the model score without really changing the model’s decisions, as long as our adjustment preserves the order of the scores<a class='footstart'>2</a>. For example, if we map all of the scores from our original model to their squares, we don’t change the order of the data with respect to the model score. Thus, quantities like accuracy will stay the same as long as we appropriately map the threshold to its square as well. However, these adjustments <em>do</em> change the calibration of a model by changing which data points lie in which buckets.
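<p> Concretely, reusing the hypothetical <em>predict</em> and <em>accuracy</em> helpers from the thresholding sketch above:

<pre>
// Squaring every score is an order-preserving remap, so the binary decisions
// (and therefore the accuracy) don't change, as long as the 0.4 threshold is
// also remapped to 0.4² = 0.16 ...
var squared = scores.map(s => s*s)
accuracy(predict(scores, 0.4), labels) == accuracy(predict(squared, 0.16), labels) // true

// ... but the squared scores land in different buckets, so the calibration
// of the model generally changes.
</pre>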

<div id='card'> <strong>Try</strong> <strong>tweaking the thresholds</strong> to <em>calibrate</em> the model scores for our data<a class='footstart'>3</a> – how much can you improve the model’s calibration? </div>

<p> In general, we don’t have to rely on tweaking the model scores by hand to improve calibration. If we are trying to calibrate the model for a particular data distribution, we can use mathematical techniques like <a href="https://en.wikipedia.org/wiki/Isotonic_regression" target="_blank">Isotonic Regression</a> or <a href="https://en.wikipedia.org/wiki/Platt_scaling" target="_blank">Platt Scaling</a> to generate the correct remapping for model scores.
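<p> As a rough illustration (not how this page implements it), Platt Scaling fits two parameters <em>a</em> and <em>b</em> so that sigmoid(<em>a</em>·score + <em>b</em>) matches the observed outcomes, which we can sketch with plain gradient descent on the log loss:

<pre>
// A bare-bones sketch of Platt scaling. In practice you would fit it on a
// held-out calibration set, not on the model's training data.
function plattScale(scores, labels, steps, lr){
  var a = 1, b = 0
  var sigmoid = z => 1/(1 + Math.exp(-z))

  for (var step = steps; step > 0; step--){
    var gradA = 0, gradB = 0
    scores.forEach((s, i) => {
      var p = sigmoid(a*s + b)
      gradA += (p - labels[i])*s  // ∂(log loss)/∂a
      gradB += (p - labels[i])    // ∂(log loss)/∂b
    })
    a -= lr*gradA/scores.length
    b -= lr*gradB/scores.length
  }

  // sigmoid(a·s + b) is monotonic in s (increasing while a stays positive),
  // so the ranking of the scores, and hence the model's decisions, is preserved.
  return s => sigmoid(a*s + b)
}

// var calibrate = plattScale(scores, labels, 5000, 0.5)
// calibrate(0.8)  // remapped score for a raw prediction of 0.8
</pre>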
</div>

<div> <h3>Shifting Data</h3>

<p> While good calibration is an important property for a model’s scores to be interpreted as probabilities, it alone does not capture all aspects of model uncertainty.

<p> What happens if it starts to rain less frequently after we’ve trained and calibrated our model? Notice how the calibration gets worse, even though we use the same calibrated model scores as before.

<p> Models are usually only well calibrated with respect to certain data distributions. If the data changes significantly between training and serving time, our models might cease to be well calibrated and we can’t rely on using our model scores as probabilities.
</div>

<div><h3>Beyond Calibration</h3>

<p> Calibration can sometimes be easy to game. For example, if we knew that it rains 50% of the time over the course of the year, then we could create a model with a constant prediction of 0.5 every day. This would have perfect calibration, despite not being a very useful model for distinguishing day-to-day differences in the probability of rain. This highlights an important issue: 

<div id='card'> Better calibration doesn’t mean more accurate predictions. </div> 

<p> It turns out that statisticians ran into exactly this issue when comparing weather forecasts in meteorology, and came up with a solution. <a href="https://sites.stat.washington.edu/raftery/Research/PDF/Gneiting2007jasa.pdf" target="_blank">Proper scoring rules</a> provide an alternative approach to measuring the quality of probabilistic forecasts: a formula compares the model’s predictions to the outcomes that actually occur, designed so that, on average, the best possible value is achieved by reporting the true event probabilities. Under such rules, a better value must mean a better prediction, incentivizing models to be both better calibrated and more accurate.<br>
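<p> A classic example is the Brier score: the mean squared difference between each predicted probability and the 0/1 outcome that actually occurred. A minimal sketch, reusing the hypothetical scores and labels from the earlier sketches:

<pre>
// Brier score, a proper scoring rule: lower is better.
function brierScore(scores, labels){
  return scores.reduce((sum, s, i) => sum + (s - labels[i])**2, 0)/scores.length
}

brierScore([0.10, 0.35, 0.55, 0.62, 0.80, 0.91], [0, 0, 1, 0, 1, 1]) // ≈ 0.13
brierScore([0.50, 0.50, 0.50, 0.50, 0.50, 0.50], [0, 0, 1, 0, 1, 1]) // 0.25
</pre>

<p> Note that the constant 0.5 model scores worse even though it is perfectly calibrated on average: a proper scoring rule rewards predictions that are both calibrated and able to separate rainy days from sunny ones.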
</div>
</div>
</div>


<h3> More Reading </h3>

<p> This post is only the beginning of the discussion on the connections between machine learning models, probability, and uncertainty. In practice, when developing machine learning models with uncertainty in mind, we may need to go beyond calibration. 

<p> In some settings, errors are not all equal. For example, if we are training a classifier to predict if a patient needs to be tested for a disease, then a false negative (missing a case of the disease) may be <a href="https://pair.withgoogle.com/explorables/measuring-fairness/" target="_blank">more detrimental</a> than a false positive (accidentally having a patient tested). In such cases, we may not want a perfectly calibrated model, but may want to skew the model scores towards one class or another. The field of <a href="https://books.google.ca/books?hl=en&lr=&id=1CDaBwAAQBAJ&oi=fnd&pg=PA1&dq=Statistical+Decision+Theory&ots=LMuipfYL0J&sig=bSdHt0_Phot_wxieYXN7cvXvmII#v=onepage&q=Statistical%20Decision%20Theory&f=false" target="_blank">Statistical Decision Theory</a> provides us with tools to determine how to better use model scores in this more general setting. Calibration may also lead to tension with other important goals like <a href="https://proceedings.neurips.cc/paper/2017/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Paper.pdf" target="_blank">model fairness</a> in some applications.

<p> Beyond this, so far we’ve only considered the case of using a single model score, i.e. a point estimate. If we trained the model a thousand times with different random seeds, or resampled the training data, we would almost certainly generate a collection of different model scores for a given input. To truly unpack the different sources of uncertainty that we might encounter, we might want to look towards <em>distributional</em> approaches to measuring uncertainty, using techniques like <a href="https://proceedings.neurips.cc/paper/2017/file/9ef2ed4b7fd2c810847ffa5fa85bce38-Paper.pdf" target="_blank">Deep Ensembles</a> or <a href="https://authors.library.caltech.edu/13793/1/MACnc92b.pdf" target="_blank">Bayesian modeling</a>. We will dig deeper into these in future posts.

<h3> Credits </h3>

<p> Nithum Thain, Adam Pearce, Jasper Snoek &amp; Mahima Pushkarna // March 2022

<p> Thanks to Balaji Lakshminarayanan, Emily Reif, Lucas Dixon, Martin Wattenberg, Fernanda Viégas, Ian Kivlichan, Nicole Mitchell, and Meredith Morris for their help with this piece.

<h3> Footnotes </h3>

<p> <a class='footend'></a> Your decision might depend both on the probability of rain and its severity (i.e. how much rain there is going to be). We’ll focus just on the probability for now.

<p> <a class='footend'></a> Applying a strictly <a href="https://en.wikipedia.org/wiki/Monotonic_function" target="_blank">monotonic function</a> to the model always keeps the order of scores the same. 

<p> <a class='footend'></a> In this example, we adjust the model scores by changing the model scores of elements within a bucket to the mean of the bucket.<br><h3> More Explorables </h3>

<p id='recirc'></p>





<p><link rel="stylesheet" href="graph-scroll.css"></p>
<script src='../third_party/d3_.js'></script>

<p><link rel='stylesheet' href='footnote.css'></p>
<script src='footnote.js'></script>

<script src='generate_data.js'></script>

<script src='util.js'></script>
<script src='weatherdata.js'></script>
<script src='draw_calibrationcurve.js'></script>
<script src='draw_model_remapping.js'></script>
<script src='draw_weathergraph.js'></script>
<script src='draw_slides.js'></script>
<script src='init.js'></script>

<p><script src='../third_party/recirc.js'></script></p>
<link rel="stylesheet" href="style.css">
</body>

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-138505774-1"></script>
<script>
  if (window.location.origin === 'https://pair.withgoogle.com'){
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', 'UA-138505774-1');
  }
</script>

<script>
  // Tweaks for displaying in an iframe
  if (window !== window.parent){
    
    // Open links in a new tab
    Array.from(document.querySelectorAll('a'))
      .forEach(e => {
        // skip anchor links
        if (e.href && e.href[0] == '#') return

        e.setAttribute('target', '_blank')
        e.setAttribute('rel', 'noopener noreferrer')
      })

    // Remove recirc h3
    Array.from(document.querySelectorAll('h3'))
      .forEach(e => {
        if (e.textContent != 'More Explorables') return

        e.parentNode.removeChild(e)
      })

    // Remove recirc container
    var recircEl = document.querySelector('#recirc')
    recircEl.parentNode.removeChild(recircEl)
  }
</script>

</html>