If someone ever asks you why you need ML monitoring, show them this picture.
Yes, of course, I was actually gonna add the explanation as a comment, but I forgot.
The idea is that models have confident and less confident areas. Where those areas lie is determined by the characteristics and distribution of the training data.
In the example above, the model classifies the test data points almost perfectly, and only a small portion of them fall in the center, the model's less confident area.
In production, however, more and more examples start coming from that low-confidence region, and a shift like this inevitably translates into a performance drop.
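To make this concrete, here is a toy sketch (my own illustrative setup, not the exact data behind the picture): a classifier trained on well-separated classes looks great on the test set, but its accuracy drops once the "production" batch is drawn closer to the decision boundary.

```python
# Toy illustration: accuracy drops when production data shifts toward the
# model's low-confidence region near the decision boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_data(n, spread):
    # Two Gaussian classes; a smaller `spread` pushes points into the overlap.
    x0 = rng.normal(loc=-spread, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+spread, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Training/test data: classes are well separated, so the model looks great.
X_train, y_train = make_data(1000, spread=3.0)
X_test, y_test = make_data(1000, spread=3.0)

model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# "Production" data: more examples come from the overlap near the boundary.
X_prod, y_prod = make_data(1000, spread=0.5)
print("production accuracy:", accuracy_score(y_prod, model.predict(X_prod)))
```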
So, you need monitoring to realize that the model might be underperforming.
The issue is that tracking performance changes in production is hard, because we rarely have ground truth labels there. The good news is that we can monitor estimated performance instead!
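As a rough sketch of what "estimated performance" can mean, assuming the model's predicted probabilities are reasonably calibrated (the idea behind confidence-based performance estimation), you can average the confidence of the predicted class over an unlabeled batch. The dataset and names below are purely illustrative.

```python
# Minimal sketch of estimating accuracy without production labels, assuming the
# model outputs reasonably calibrated probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_train, X_prod, y_train, y_prod = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# In production we see X_prod but not y_prod. Each prediction is correct with
# probability equal to the confidence of the predicted class, so averaging that
# confidence estimates accuracy on the unlabeled batch.
estimated_acc = model.predict_proba(X_prod).max(axis=1).mean()
print(f"estimated accuracy (no labels): {estimated_acc:.3f}")

# Sanity check against the labels we would not normally have in production.
print(f"actual accuracy: {accuracy_score(y_prod, model.predict(X_prod)):.3f}")
```

If the production distribution drifts toward the low-confidence region, this estimate drops even though no labels were used, which is exactly the signal you want from monitoring.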